00:00:00.000 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 106 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3284 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.041 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.043 The recommended git tool is: git 00:00:00.043 using credential 00000000-0000-0000-0000-000000000002 00:00:00.047 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.058 Fetching changes from the remote Git repository 00:00:00.061 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.080 Using shallow fetch with depth 1 00:00:00.080 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.080 > git --version # timeout=10 00:00:00.109 > git --version # 'git version 2.39.2' 00:00:00.109 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.145 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.145 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.308 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.319 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.343 Checking out Revision 1c6ed56008363df82da0fcec030d6d5a1f7bd340 (FETCH_HEAD) 00:00:03.343 > git config core.sparsecheckout # timeout=10 00:00:03.358 > git read-tree -mu HEAD # timeout=10 00:00:03.374 > git checkout -f 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=5 00:00:03.394 Commit message: "spdk-abi-per-patch: pass revision to subbuild" 00:00:03.395 > git rev-list --no-walk 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=10 00:00:03.495 [Pipeline] Start of Pipeline 00:00:03.510 [Pipeline] library 00:00:03.512 Loading library shm_lib@master 00:00:03.512 Library shm_lib@master is cached. Copying from home. 00:00:03.526 [Pipeline] node 00:00:03.533 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.535 [Pipeline] { 00:00:03.543 [Pipeline] catchError 00:00:03.544 [Pipeline] { 00:00:03.554 [Pipeline] wrap 00:00:03.561 [Pipeline] { 00:00:03.567 [Pipeline] stage 00:00:03.568 [Pipeline] { (Prologue) 00:00:03.720 [Pipeline] sh 00:00:03.996 + logger -p user.info -t JENKINS-CI 00:00:04.011 [Pipeline] echo 00:00:04.012 Node: GP11 00:00:04.017 [Pipeline] sh 00:00:04.304 [Pipeline] setCustomBuildProperty 00:00:04.314 [Pipeline] echo 00:00:04.315 Cleanup processes 00:00:04.320 [Pipeline] sh 00:00:04.595 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.595 707977 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.605 [Pipeline] sh 00:00:04.879 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.879 ++ grep -v 'sudo pgrep' 00:00:04.879 ++ awk '{print $1}' 00:00:04.879 + sudo kill -9 00:00:04.879 + true 00:00:04.894 [Pipeline] cleanWs 00:00:04.903 [WS-CLEANUP] Deleting project workspace... 00:00:04.903 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.908 [WS-CLEANUP] done 00:00:04.912 [Pipeline] setCustomBuildProperty 00:00:04.923 [Pipeline] sh 00:00:05.212 + sudo git config --global --replace-all safe.directory '*' 00:00:05.294 [Pipeline] httpRequest 00:00:05.311 [Pipeline] echo 00:00:05.312 Sorcerer 10.211.164.101 is alive 00:00:05.321 [Pipeline] httpRequest 00:00:05.325 HttpMethod: GET 00:00:05.326 URL: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:05.326 Sending request to url: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:05.337 Response Code: HTTP/1.1 200 OK 00:00:05.337 Success: Status code 200 is in the accepted range: 200,404 00:00:05.337 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:10.379 [Pipeline] sh 00:00:10.659 + tar --no-same-owner -xf jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:10.684 [Pipeline] httpRequest 00:00:10.709 [Pipeline] echo 00:00:10.711 Sorcerer 10.211.164.101 is alive 00:00:10.719 [Pipeline] httpRequest 00:00:10.724 HttpMethod: GET 00:00:10.724 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:10.725 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:10.749 Response Code: HTTP/1.1 200 OK 00:00:10.749 Success: Status code 200 is in the accepted range: 200,404 00:00:10.750 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:58.805 [Pipeline] sh 00:00:59.086 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:02.379 [Pipeline] sh 00:01:02.664 + git -C spdk log --oneline -n5 00:01:02.664 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:02.664 330a4f94d nvme: check pthread_mutex_destroy() return value 00:01:02.664 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:02.664 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:01:02.664 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:01:02.712 [Pipeline] withCredentials 00:01:02.722 > git --version # timeout=10 00:01:02.732 > git --version # 'git version 2.39.2' 00:01:02.747 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:02.750 [Pipeline] { 00:01:02.759 [Pipeline] retry 00:01:02.761 [Pipeline] { 00:01:02.779 [Pipeline] sh 00:01:03.057 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:03.325 [Pipeline] } 00:01:03.349 [Pipeline] // retry 00:01:03.354 [Pipeline] } 00:01:03.373 [Pipeline] // withCredentials 00:01:03.382 [Pipeline] httpRequest 00:01:03.398 [Pipeline] echo 00:01:03.399 Sorcerer 10.211.164.101 is alive 00:01:03.408 [Pipeline] httpRequest 00:01:03.413 HttpMethod: GET 00:01:03.413 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:03.414 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:03.414 Response Code: HTTP/1.1 200 OK 00:01:03.415 Success: Status code 200 is in the accepted range: 200,404 00:01:03.415 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:08.266 [Pipeline] sh 00:01:08.544 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:10.451 [Pipeline] sh 00:01:10.729 + git -C dpdk log --oneline -n5 00:01:10.729 caf0f5d395 version: 22.11.4 00:01:10.729 7d6f1cc05f 
Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:10.729 dc9c799c7d vhost: fix missing spinlock unlock 00:01:10.729 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:10.729 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:10.739 [Pipeline] } 00:01:10.756 [Pipeline] // stage 00:01:10.764 [Pipeline] stage 00:01:10.766 [Pipeline] { (Prepare) 00:01:10.788 [Pipeline] writeFile 00:01:10.805 [Pipeline] sh 00:01:11.083 + logger -p user.info -t JENKINS-CI 00:01:11.096 [Pipeline] sh 00:01:11.388 + logger -p user.info -t JENKINS-CI 00:01:11.400 [Pipeline] sh 00:01:11.716 + cat autorun-spdk.conf 00:01:11.716 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.716 SPDK_TEST_NVMF=1 00:01:11.716 SPDK_TEST_NVME_CLI=1 00:01:11.716 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.716 SPDK_TEST_NVMF_NICS=e810 00:01:11.716 SPDK_TEST_VFIOUSER=1 00:01:11.716 SPDK_RUN_UBSAN=1 00:01:11.716 NET_TYPE=phy 00:01:11.716 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:11.716 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:11.722 RUN_NIGHTLY=1 00:01:11.727 [Pipeline] readFile 00:01:11.755 [Pipeline] withEnv 00:01:11.757 [Pipeline] { 00:01:11.774 [Pipeline] sh 00:01:12.074 + set -ex 00:01:12.074 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:12.074 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:12.074 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.074 ++ SPDK_TEST_NVMF=1 00:01:12.074 ++ SPDK_TEST_NVME_CLI=1 00:01:12.074 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.074 ++ SPDK_TEST_NVMF_NICS=e810 00:01:12.074 ++ SPDK_TEST_VFIOUSER=1 00:01:12.074 ++ SPDK_RUN_UBSAN=1 00:01:12.074 ++ NET_TYPE=phy 00:01:12.074 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:12.074 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:12.074 ++ RUN_NIGHTLY=1 00:01:12.074 + case $SPDK_TEST_NVMF_NICS in 00:01:12.074 + DRIVERS=ice 00:01:12.074 + [[ tcp == \r\d\m\a ]] 00:01:12.074 + [[ -n ice ]] 00:01:12.074 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:12.074 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:12.074 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:12.074 rmmod: ERROR: Module irdma is not currently loaded 00:01:12.074 rmmod: ERROR: Module i40iw is not currently loaded 00:01:12.074 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:12.074 + true 00:01:12.074 + for D in $DRIVERS 00:01:12.074 + sudo modprobe ice 00:01:12.074 + exit 0 00:01:12.087 [Pipeline] } 00:01:12.111 [Pipeline] // withEnv 00:01:12.116 [Pipeline] } 00:01:12.138 [Pipeline] // stage 00:01:12.149 [Pipeline] catchError 00:01:12.151 [Pipeline] { 00:01:12.170 [Pipeline] timeout 00:01:12.170 Timeout set to expire in 50 min 00:01:12.172 [Pipeline] { 00:01:12.192 [Pipeline] stage 00:01:12.194 [Pipeline] { (Tests) 00:01:12.211 [Pipeline] sh 00:01:12.489 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.489 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.489 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.489 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:12.489 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:12.489 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:12.489 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:12.489 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:12.489 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:12.489 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:12.489 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:12.489 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.489 + source /etc/os-release 00:01:12.489 ++ NAME='Fedora Linux' 00:01:12.489 ++ VERSION='38 (Cloud Edition)' 00:01:12.489 ++ ID=fedora 00:01:12.489 ++ VERSION_ID=38 00:01:12.489 ++ VERSION_CODENAME= 00:01:12.489 ++ PLATFORM_ID=platform:f38 00:01:12.489 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:12.489 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:12.489 ++ LOGO=fedora-logo-icon 00:01:12.489 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:12.489 ++ HOME_URL=https://fedoraproject.org/ 00:01:12.489 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:12.489 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:12.489 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:12.489 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:12.489 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:12.489 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:12.489 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:12.489 ++ SUPPORT_END=2024-05-14 00:01:12.489 ++ VARIANT='Cloud Edition' 00:01:12.489 ++ VARIANT_ID=cloud 00:01:12.489 + uname -a 00:01:12.489 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:12.489 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:13.420 Hugepages 00:01:13.420 node hugesize free / total 00:01:13.420 node0 1048576kB 0 / 0 00:01:13.420 node0 2048kB 0 / 0 00:01:13.420 node1 1048576kB 0 / 0 00:01:13.420 node1 2048kB 0 / 0 00:01:13.420 00:01:13.420 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:13.420 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:13.420 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:13.420 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:13.420 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:13.420 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:13.420 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:13.420 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:13.420 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:13.420 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:13.420 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:13.420 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:13.420 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:13.420 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:13.420 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:13.420 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:13.420 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:13.420 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:13.420 + rm -f /tmp/spdk-ld-path 00:01:13.420 + source autorun-spdk.conf 00:01:13.420 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.420 ++ SPDK_TEST_NVMF=1 00:01:13.420 ++ SPDK_TEST_NVME_CLI=1 00:01:13.420 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.420 ++ SPDK_TEST_NVMF_NICS=e810 00:01:13.420 ++ SPDK_TEST_VFIOUSER=1 00:01:13.420 ++ SPDK_RUN_UBSAN=1 00:01:13.420 ++ NET_TYPE=phy 00:01:13.420 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:13.420 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:13.420 ++ RUN_NIGHTLY=1 00:01:13.420 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:13.420 + [[ -n '' ]] 00:01:13.420 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:13.679 + for M in /var/spdk/build-*-manifest.txt 00:01:13.679 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:13.679 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:13.679 + for M in /var/spdk/build-*-manifest.txt 00:01:13.679 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:13.679 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:13.679 ++ uname 00:01:13.679 + [[ Linux == \L\i\n\u\x ]] 00:01:13.679 + sudo dmesg -T 00:01:13.679 + sudo dmesg --clear 00:01:13.679 + dmesg_pid=709297 00:01:13.679 + [[ Fedora Linux == FreeBSD ]] 00:01:13.679 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.679 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:13.679 + sudo dmesg -Tw 00:01:13.679 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:13.679 + [[ -x /usr/src/fio-static/fio ]] 00:01:13.679 + export FIO_BIN=/usr/src/fio-static/fio 00:01:13.679 + FIO_BIN=/usr/src/fio-static/fio 00:01:13.679 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:13.679 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:13.680 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:13.680 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:13.680 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:13.680 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:13.680 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:13.680 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:13.680 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:13.680 Test configuration: 00:01:13.680 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.680 SPDK_TEST_NVMF=1 00:01:13.680 SPDK_TEST_NVME_CLI=1 00:01:13.680 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.680 SPDK_TEST_NVMF_NICS=e810 00:01:13.680 SPDK_TEST_VFIOUSER=1 00:01:13.680 SPDK_RUN_UBSAN=1 00:01:13.680 NET_TYPE=phy 00:01:13.680 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:13.680 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:13.680 RUN_NIGHTLY=1 17:36:48 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:13.680 17:36:48 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:13.680 17:36:48 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:13.680 17:36:48 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:13.680 17:36:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.680 17:36:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.680 17:36:48 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.680 17:36:48 -- paths/export.sh@5 -- $ export PATH 00:01:13.680 17:36:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:13.680 17:36:48 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:13.680 17:36:48 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:13.680 17:36:48 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721489808.XXXXXX 00:01:13.680 17:36:48 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721489808.hBIrZa 00:01:13.680 17:36:48 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:13.680 17:36:48 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:01:13.680 17:36:48 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:13.680 17:36:48 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:13.680 17:36:48 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:13.680 17:36:48 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:13.680 17:36:48 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:13.680 17:36:48 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:13.680 17:36:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.680 17:36:48 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:13.680 17:36:48 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:13.680 17:36:48 -- pm/common@17 -- $ local monitor 00:01:13.680 17:36:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.680 17:36:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.680 17:36:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.680 17:36:48 -- pm/common@21 -- $ date +%s 00:01:13.680 17:36:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:13.680 17:36:48 -- pm/common@21 -- $ date +%s 00:01:13.680 17:36:48 -- pm/common@25 -- $ sleep 1 00:01:13.680 17:36:48 -- pm/common@21 -- $ date +%s 00:01:13.680 17:36:48 -- pm/common@21 -- $ date +%s 00:01:13.680 17:36:48 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721489808 00:01:13.680 17:36:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721489808 00:01:13.680 17:36:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721489808 00:01:13.680 17:36:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721489808 00:01:13.680 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721489808_collect-vmstat.pm.log 00:01:13.680 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721489808_collect-cpu-load.pm.log 00:01:13.680 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721489808_collect-cpu-temp.pm.log 00:01:13.680 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721489808_collect-bmc-pm.bmc.pm.log 00:01:14.613 17:36:49 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:14.613 17:36:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:14.613 17:36:49 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:14.613 17:36:49 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:14.613 17:36:49 -- spdk/autobuild.sh@16 -- $ date -u 00:01:14.613 Sat Jul 20 03:36:49 PM UTC 2024 00:01:14.613 17:36:49 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:14.613 v24.05-13-g5fa2f5086 00:01:14.613 17:36:49 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:14.613 17:36:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:14.613 17:36:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:14.613 17:36:49 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:14.613 17:36:49 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:14.613 17:36:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.613 ************************************ 00:01:14.613 START TEST ubsan 00:01:14.613 ************************************ 00:01:14.613 17:36:49 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:14.613 using ubsan 00:01:14.613 00:01:14.613 real 0m0.000s 00:01:14.613 user 0m0.000s 00:01:14.613 sys 0m0.000s 00:01:14.613 17:36:49 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:14.613 17:36:49 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:14.613 ************************************ 00:01:14.613 END TEST ubsan 00:01:14.613 ************************************ 00:01:14.872 17:36:49 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:14.872 17:36:49 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:14.872 17:36:49 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:14.872 17:36:49 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:14.872 17:36:49 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:14.872 17:36:49 -- common/autotest_common.sh@10 -- $ set +x 
00:01:14.872 ************************************ 00:01:14.872 START TEST build_native_dpdk 00:01:14.872 ************************************ 00:01:14.872 17:36:49 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:14.872 caf0f5d395 version: 22.11.4 00:01:14.872 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:14.872 dc9c799c7d vhost: fix missing spinlock unlock 00:01:14.872 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:14.872 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:14.872 17:36:49 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:14.873 
17:36:49 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:14.873 17:36:49 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:14.873 patching file config/rte_config.h 00:01:14.873 Hunk #1 succeeded at 60 (offset 1 line). 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:14.873 17:36:49 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:19.060 The Meson build system 00:01:19.060 Version: 1.3.1 00:01:19.060 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:19.060 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:19.060 Build type: native build 00:01:19.060 Program cat found: YES (/usr/bin/cat) 00:01:19.060 Project name: DPDK 00:01:19.060 Project version: 22.11.4 00:01:19.060 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:19.060 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:19.060 Host machine cpu family: x86_64 00:01:19.060 Host machine cpu: x86_64 00:01:19.060 Message: ## Building in Developer Mode ## 00:01:19.060 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:19.060 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:19.060 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:19.060 Program objdump found: YES (/usr/bin/objdump) 00:01:19.060 Program python3 found: YES (/usr/bin/python3) 00:01:19.060 Program cat found: YES (/usr/bin/cat) 00:01:19.060 config/meson.build:83: WARNING: The "machine" option is 
deprecated. Please use "cpu_instruction_set" instead. 00:01:19.060 Checking for size of "void *" : 8 00:01:19.060 Checking for size of "void *" : 8 (cached) 00:01:19.060 Library m found: YES 00:01:19.060 Library numa found: YES 00:01:19.060 Has header "numaif.h" : YES 00:01:19.060 Library fdt found: NO 00:01:19.060 Library execinfo found: NO 00:01:19.060 Has header "execinfo.h" : YES 00:01:19.060 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:19.060 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:19.060 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:19.060 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:19.060 Run-time dependency openssl found: YES 3.0.9 00:01:19.060 Run-time dependency libpcap found: YES 1.10.4 00:01:19.060 Has header "pcap.h" with dependency libpcap: YES 00:01:19.060 Compiler for C supports arguments -Wcast-qual: YES 00:01:19.060 Compiler for C supports arguments -Wdeprecated: YES 00:01:19.060 Compiler for C supports arguments -Wformat: YES 00:01:19.060 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:19.060 Compiler for C supports arguments -Wformat-security: NO 00:01:19.060 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:19.060 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:19.060 Compiler for C supports arguments -Wnested-externs: YES 00:01:19.060 Compiler for C supports arguments -Wold-style-definition: YES 00:01:19.060 Compiler for C supports arguments -Wpointer-arith: YES 00:01:19.060 Compiler for C supports arguments -Wsign-compare: YES 00:01:19.060 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:19.060 Compiler for C supports arguments -Wundef: YES 00:01:19.060 Compiler for C supports arguments -Wwrite-strings: YES 00:01:19.060 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:19.060 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:19.060 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:19.060 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:19.060 Compiler for C supports arguments -mavx512f: YES 00:01:19.060 Checking if "AVX512 checking" compiles: YES 00:01:19.060 Fetching value of define "__SSE4_2__" : 1 00:01:19.060 Fetching value of define "__AES__" : 1 00:01:19.060 Fetching value of define "__AVX__" : 1 00:01:19.060 Fetching value of define "__AVX2__" : (undefined) 00:01:19.060 Fetching value of define "__AVX512BW__" : (undefined) 00:01:19.060 Fetching value of define "__AVX512CD__" : (undefined) 00:01:19.060 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:19.060 Fetching value of define "__AVX512F__" : (undefined) 00:01:19.060 Fetching value of define "__AVX512VL__" : (undefined) 00:01:19.060 Fetching value of define "__PCLMUL__" : 1 00:01:19.060 Fetching value of define "__RDRND__" : 1 00:01:19.060 Fetching value of define "__RDSEED__" : (undefined) 00:01:19.060 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:19.060 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:19.060 Message: lib/kvargs: Defining dependency "kvargs" 00:01:19.060 Message: lib/telemetry: Defining dependency "telemetry" 00:01:19.060 Checking for function "getentropy" : YES 00:01:19.060 Message: lib/eal: Defining dependency "eal" 00:01:19.060 Message: lib/ring: Defining dependency "ring" 00:01:19.060 Message: lib/rcu: Defining dependency "rcu" 00:01:19.060 Message: lib/mempool: Defining dependency "mempool" 00:01:19.060 Message: 
lib/mbuf: Defining dependency "mbuf" 00:01:19.060 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:19.060 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:19.060 Compiler for C supports arguments -mpclmul: YES 00:01:19.060 Compiler for C supports arguments -maes: YES 00:01:19.060 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:19.060 Compiler for C supports arguments -mavx512bw: YES 00:01:19.060 Compiler for C supports arguments -mavx512dq: YES 00:01:19.060 Compiler for C supports arguments -mavx512vl: YES 00:01:19.060 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:19.060 Compiler for C supports arguments -mavx2: YES 00:01:19.060 Compiler for C supports arguments -mavx: YES 00:01:19.060 Message: lib/net: Defining dependency "net" 00:01:19.060 Message: lib/meter: Defining dependency "meter" 00:01:19.060 Message: lib/ethdev: Defining dependency "ethdev" 00:01:19.060 Message: lib/pci: Defining dependency "pci" 00:01:19.060 Message: lib/cmdline: Defining dependency "cmdline" 00:01:19.060 Message: lib/metrics: Defining dependency "metrics" 00:01:19.060 Message: lib/hash: Defining dependency "hash" 00:01:19.060 Message: lib/timer: Defining dependency "timer" 00:01:19.060 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:19.060 Compiler for C supports arguments -mavx2: YES (cached) 00:01:19.060 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:19.060 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:19.060 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:19.060 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:19.060 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:19.060 Message: lib/acl: Defining dependency "acl" 00:01:19.060 Message: lib/bbdev: Defining dependency "bbdev" 00:01:19.060 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:19.060 Run-time dependency libelf found: YES 0.190 00:01:19.060 Message: lib/bpf: Defining dependency "bpf" 00:01:19.060 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:19.060 Message: lib/compressdev: Defining dependency "compressdev" 00:01:19.060 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:19.060 Message: lib/distributor: Defining dependency "distributor" 00:01:19.060 Message: lib/efd: Defining dependency "efd" 00:01:19.060 Message: lib/eventdev: Defining dependency "eventdev" 00:01:19.060 Message: lib/gpudev: Defining dependency "gpudev" 00:01:19.060 Message: lib/gro: Defining dependency "gro" 00:01:19.060 Message: lib/gso: Defining dependency "gso" 00:01:19.060 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:19.060 Message: lib/jobstats: Defining dependency "jobstats" 00:01:19.060 Message: lib/latencystats: Defining dependency "latencystats" 00:01:19.060 Message: lib/lpm: Defining dependency "lpm" 00:01:19.060 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:19.060 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:19.060 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:19.060 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:19.060 Message: lib/member: Defining dependency "member" 00:01:19.060 Message: lib/pcapng: Defining dependency "pcapng" 00:01:19.060 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:19.060 Message: lib/power: Defining dependency "power" 00:01:19.060 Message: lib/rawdev: Defining dependency "rawdev" 00:01:19.060 
Message: lib/regexdev: Defining dependency "regexdev" 00:01:19.060 Message: lib/dmadev: Defining dependency "dmadev" 00:01:19.060 Message: lib/rib: Defining dependency "rib" 00:01:19.060 Message: lib/reorder: Defining dependency "reorder" 00:01:19.060 Message: lib/sched: Defining dependency "sched" 00:01:19.060 Message: lib/security: Defining dependency "security" 00:01:19.060 Message: lib/stack: Defining dependency "stack" 00:01:19.060 Has header "linux/userfaultfd.h" : YES 00:01:19.060 Message: lib/vhost: Defining dependency "vhost" 00:01:19.060 Message: lib/ipsec: Defining dependency "ipsec" 00:01:19.060 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:19.060 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:19.060 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:19.060 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:19.060 Message: lib/fib: Defining dependency "fib" 00:01:19.060 Message: lib/port: Defining dependency "port" 00:01:19.060 Message: lib/pdump: Defining dependency "pdump" 00:01:19.060 Message: lib/table: Defining dependency "table" 00:01:19.060 Message: lib/pipeline: Defining dependency "pipeline" 00:01:19.060 Message: lib/graph: Defining dependency "graph" 00:01:19.060 Message: lib/node: Defining dependency "node" 00:01:19.060 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:19.060 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:19.060 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:19.060 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:19.060 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:19.060 Compiler for C supports arguments -Wno-unused-value: YES 00:01:19.997 Compiler for C supports arguments -Wno-format: YES 00:01:19.997 Compiler for C supports arguments -Wno-format-security: YES 00:01:19.997 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:19.997 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:19.997 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:19.997 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:19.997 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:19.997 Compiler for C supports arguments -mavx2: YES (cached) 00:01:19.997 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:19.997 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:19.997 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:19.997 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:19.997 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:19.997 Program doxygen found: YES (/usr/bin/doxygen) 00:01:19.997 Configuring doxy-api.conf using configuration 00:01:19.997 Program sphinx-build found: NO 00:01:19.997 Configuring rte_build_config.h using configuration 00:01:19.997 Message: 00:01:19.997 ================= 00:01:19.997 Applications Enabled 00:01:19.997 ================= 00:01:19.997 00:01:19.997 apps: 00:01:19.997 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:19.997 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:19.997 test-security-perf, 00:01:19.997 00:01:19.997 Message: 00:01:19.997 ================= 00:01:19.997 Libraries Enabled 00:01:19.997 ================= 00:01:19.997 00:01:19.997 libs: 00:01:19.997 kvargs, telemetry, eal, ring, rcu, 
mempool, mbuf, net, 00:01:19.997 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:19.997 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:19.998 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:19.998 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:19.998 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:19.998 table, pipeline, graph, node, 00:01:19.998 00:01:19.998 Message: 00:01:19.998 =============== 00:01:19.998 Drivers Enabled 00:01:19.998 =============== 00:01:19.998 00:01:19.998 common: 00:01:19.998 00:01:19.998 bus: 00:01:19.998 pci, vdev, 00:01:19.998 mempool: 00:01:19.998 ring, 00:01:19.998 dma: 00:01:19.998 00:01:19.998 net: 00:01:19.998 i40e, 00:01:19.998 raw: 00:01:19.998 00:01:19.998 crypto: 00:01:19.998 00:01:19.998 compress: 00:01:19.998 00:01:19.998 regex: 00:01:19.998 00:01:19.998 vdpa: 00:01:19.998 00:01:19.998 event: 00:01:19.998 00:01:19.998 baseband: 00:01:19.998 00:01:19.998 gpu: 00:01:19.998 00:01:19.998 00:01:19.998 Message: 00:01:19.998 ================= 00:01:19.998 Content Skipped 00:01:19.998 ================= 00:01:19.998 00:01:19.998 apps: 00:01:19.998 00:01:19.998 libs: 00:01:19.998 kni: explicitly disabled via build config (deprecated lib) 00:01:19.998 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:19.998 00:01:19.998 drivers: 00:01:19.998 common/cpt: not in enabled drivers build config 00:01:19.998 common/dpaax: not in enabled drivers build config 00:01:19.998 common/iavf: not in enabled drivers build config 00:01:19.998 common/idpf: not in enabled drivers build config 00:01:19.998 common/mvep: not in enabled drivers build config 00:01:19.998 common/octeontx: not in enabled drivers build config 00:01:19.998 bus/auxiliary: not in enabled drivers build config 00:01:19.998 bus/dpaa: not in enabled drivers build config 00:01:19.998 bus/fslmc: not in enabled drivers build config 00:01:19.998 bus/ifpga: not in enabled drivers build config 00:01:19.998 bus/vmbus: not in enabled drivers build config 00:01:19.998 common/cnxk: not in enabled drivers build config 00:01:19.998 common/mlx5: not in enabled drivers build config 00:01:19.998 common/qat: not in enabled drivers build config 00:01:19.998 common/sfc_efx: not in enabled drivers build config 00:01:19.998 mempool/bucket: not in enabled drivers build config 00:01:19.998 mempool/cnxk: not in enabled drivers build config 00:01:19.998 mempool/dpaa: not in enabled drivers build config 00:01:19.998 mempool/dpaa2: not in enabled drivers build config 00:01:19.998 mempool/octeontx: not in enabled drivers build config 00:01:19.998 mempool/stack: not in enabled drivers build config 00:01:19.998 dma/cnxk: not in enabled drivers build config 00:01:19.998 dma/dpaa: not in enabled drivers build config 00:01:19.998 dma/dpaa2: not in enabled drivers build config 00:01:19.998 dma/hisilicon: not in enabled drivers build config 00:01:19.998 dma/idxd: not in enabled drivers build config 00:01:19.998 dma/ioat: not in enabled drivers build config 00:01:19.998 dma/skeleton: not in enabled drivers build config 00:01:19.998 net/af_packet: not in enabled drivers build config 00:01:19.998 net/af_xdp: not in enabled drivers build config 00:01:19.998 net/ark: not in enabled drivers build config 00:01:19.998 net/atlantic: not in enabled drivers build config 00:01:19.998 net/avp: not in enabled drivers build config 00:01:19.998 net/axgbe: not in enabled drivers build config 00:01:19.998 net/bnx2x: not in enabled 
drivers build config 00:01:19.998 net/bnxt: not in enabled drivers build config 00:01:19.998 net/bonding: not in enabled drivers build config 00:01:19.998 net/cnxk: not in enabled drivers build config 00:01:19.998 net/cxgbe: not in enabled drivers build config 00:01:19.998 net/dpaa: not in enabled drivers build config 00:01:19.998 net/dpaa2: not in enabled drivers build config 00:01:19.998 net/e1000: not in enabled drivers build config 00:01:19.998 net/ena: not in enabled drivers build config 00:01:19.998 net/enetc: not in enabled drivers build config 00:01:19.998 net/enetfec: not in enabled drivers build config 00:01:19.998 net/enic: not in enabled drivers build config 00:01:19.998 net/failsafe: not in enabled drivers build config 00:01:19.998 net/fm10k: not in enabled drivers build config 00:01:19.998 net/gve: not in enabled drivers build config 00:01:19.998 net/hinic: not in enabled drivers build config 00:01:19.998 net/hns3: not in enabled drivers build config 00:01:19.998 net/iavf: not in enabled drivers build config 00:01:19.998 net/ice: not in enabled drivers build config 00:01:19.998 net/idpf: not in enabled drivers build config 00:01:19.998 net/igc: not in enabled drivers build config 00:01:19.998 net/ionic: not in enabled drivers build config 00:01:19.998 net/ipn3ke: not in enabled drivers build config 00:01:19.998 net/ixgbe: not in enabled drivers build config 00:01:19.998 net/kni: not in enabled drivers build config 00:01:19.998 net/liquidio: not in enabled drivers build config 00:01:19.998 net/mana: not in enabled drivers build config 00:01:19.998 net/memif: not in enabled drivers build config 00:01:19.998 net/mlx4: not in enabled drivers build config 00:01:19.998 net/mlx5: not in enabled drivers build config 00:01:19.998 net/mvneta: not in enabled drivers build config 00:01:19.998 net/mvpp2: not in enabled drivers build config 00:01:19.998 net/netvsc: not in enabled drivers build config 00:01:19.998 net/nfb: not in enabled drivers build config 00:01:19.998 net/nfp: not in enabled drivers build config 00:01:19.998 net/ngbe: not in enabled drivers build config 00:01:19.998 net/null: not in enabled drivers build config 00:01:19.998 net/octeontx: not in enabled drivers build config 00:01:19.998 net/octeon_ep: not in enabled drivers build config 00:01:19.998 net/pcap: not in enabled drivers build config 00:01:19.998 net/pfe: not in enabled drivers build config 00:01:19.998 net/qede: not in enabled drivers build config 00:01:19.998 net/ring: not in enabled drivers build config 00:01:19.998 net/sfc: not in enabled drivers build config 00:01:19.998 net/softnic: not in enabled drivers build config 00:01:19.998 net/tap: not in enabled drivers build config 00:01:19.998 net/thunderx: not in enabled drivers build config 00:01:19.998 net/txgbe: not in enabled drivers build config 00:01:19.998 net/vdev_netvsc: not in enabled drivers build config 00:01:19.998 net/vhost: not in enabled drivers build config 00:01:19.998 net/virtio: not in enabled drivers build config 00:01:19.998 net/vmxnet3: not in enabled drivers build config 00:01:19.998 raw/cnxk_bphy: not in enabled drivers build config 00:01:19.998 raw/cnxk_gpio: not in enabled drivers build config 00:01:19.998 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:19.998 raw/ifpga: not in enabled drivers build config 00:01:19.998 raw/ntb: not in enabled drivers build config 00:01:19.998 raw/skeleton: not in enabled drivers build config 00:01:19.998 crypto/armv8: not in enabled drivers build config 00:01:19.998 crypto/bcmfs: not in 
enabled drivers build config 00:01:19.998 crypto/caam_jr: not in enabled drivers build config 00:01:19.998 crypto/ccp: not in enabled drivers build config 00:01:19.998 crypto/cnxk: not in enabled drivers build config 00:01:19.998 crypto/dpaa_sec: not in enabled drivers build config 00:01:19.998 crypto/dpaa2_sec: not in enabled drivers build config 00:01:19.998 crypto/ipsec_mb: not in enabled drivers build config 00:01:19.998 crypto/mlx5: not in enabled drivers build config 00:01:19.998 crypto/mvsam: not in enabled drivers build config 00:01:19.998 crypto/nitrox: not in enabled drivers build config 00:01:19.998 crypto/null: not in enabled drivers build config 00:01:19.998 crypto/octeontx: not in enabled drivers build config 00:01:19.998 crypto/openssl: not in enabled drivers build config 00:01:19.998 crypto/scheduler: not in enabled drivers build config 00:01:19.998 crypto/uadk: not in enabled drivers build config 00:01:19.998 crypto/virtio: not in enabled drivers build config 00:01:19.998 compress/isal: not in enabled drivers build config 00:01:19.998 compress/mlx5: not in enabled drivers build config 00:01:19.998 compress/octeontx: not in enabled drivers build config 00:01:19.998 compress/zlib: not in enabled drivers build config 00:01:19.998 regex/mlx5: not in enabled drivers build config 00:01:19.998 regex/cn9k: not in enabled drivers build config 00:01:19.998 vdpa/ifc: not in enabled drivers build config 00:01:19.998 vdpa/mlx5: not in enabled drivers build config 00:01:19.998 vdpa/sfc: not in enabled drivers build config 00:01:19.998 event/cnxk: not in enabled drivers build config 00:01:19.998 event/dlb2: not in enabled drivers build config 00:01:19.998 event/dpaa: not in enabled drivers build config 00:01:19.998 event/dpaa2: not in enabled drivers build config 00:01:19.998 event/dsw: not in enabled drivers build config 00:01:19.998 event/opdl: not in enabled drivers build config 00:01:19.998 event/skeleton: not in enabled drivers build config 00:01:19.998 event/sw: not in enabled drivers build config 00:01:19.998 event/octeontx: not in enabled drivers build config 00:01:19.998 baseband/acc: not in enabled drivers build config 00:01:19.998 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:19.998 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:19.998 baseband/la12xx: not in enabled drivers build config 00:01:19.998 baseband/null: not in enabled drivers build config 00:01:19.998 baseband/turbo_sw: not in enabled drivers build config 00:01:19.998 gpu/cuda: not in enabled drivers build config 00:01:19.998 00:01:19.998 00:01:19.998 Build targets in project: 316 00:01:19.998 00:01:19.999 DPDK 22.11.4 00:01:19.999 00:01:19.999 User defined options 00:01:19.999 libdir : lib 00:01:19.999 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:19.999 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:19.999 c_link_args : 00:01:19.999 enable_docs : false 00:01:19.999 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:19.999 enable_kmods : false 00:01:19.999 machine : native 00:01:19.999 tests : false 00:01:19.999 00:01:19.999 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:19.999 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:19.999 17:36:54 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:19.999 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:19.999 [1/745] Generating lib/rte_kvargs_def with a custom command 00:01:19.999 [2/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:19.999 [3/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:19.999 [4/745] Generating lib/rte_telemetry_def with a custom command 00:01:19.999 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:19.999 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:19.999 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:19.999 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:19.999 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:19.999 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:19.999 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:19.999 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:20.264 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:20.264 [14/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:20.264 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:20.264 [16/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:20.264 [17/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:20.264 [18/745] Linking static target lib/librte_kvargs.a 00:01:20.264 [19/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:20.264 [20/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:20.264 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:20.264 [22/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:20.264 [23/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:20.264 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:20.264 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:20.264 [26/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:20.264 [27/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:20.264 [28/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:20.264 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:20.264 [30/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:20.264 [31/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:20.264 [32/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:20.264 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:20.264 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:20.264 [35/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:20.264 [36/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:20.264 [37/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:20.264 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:20.264 [39/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:20.264 [40/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:20.264 [41/745] Generating lib/rte_eal_def with a custom command 00:01:20.264 [42/745] Generating lib/rte_eal_mingw with a custom command 00:01:20.264 [43/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:20.264 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:20.264 [45/745] Generating lib/rte_ring_def with a custom command 00:01:20.264 [46/745] Generating lib/rte_ring_mingw with a custom command 00:01:20.264 [47/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:20.264 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:20.264 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:20.264 [50/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:20.264 [51/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:20.264 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:20.264 [53/745] Generating lib/rte_rcu_def with a custom command 00:01:20.264 [54/745] Generating lib/rte_rcu_mingw with a custom command 00:01:20.264 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:20.264 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:20.264 [57/745] Generating lib/rte_mempool_def with a custom command 00:01:20.264 [58/745] Generating lib/rte_mempool_mingw with a custom command 00:01:20.264 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:20.264 [60/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:20.522 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:20.522 [62/745] Generating lib/rte_mbuf_def with a custom command 00:01:20.522 [63/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:20.522 [64/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:20.522 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:20.522 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:20.523 [67/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:20.523 [68/745] Generating lib/rte_net_def with a custom command 00:01:20.523 [69/745] Generating lib/rte_net_mingw with a custom command 00:01:20.523 [70/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:20.523 [71/745] Generating lib/rte_meter_def with a custom command 00:01:20.523 [72/745] Generating lib/rte_meter_mingw with a custom command 00:01:20.523 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:20.523 [74/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:20.523 [75/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:20.523 [76/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:20.523 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:20.523 [78/745] Generating lib/rte_ethdev_def with a custom command 00:01:20.523 [79/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.523 [80/745] Linking target 
lib/librte_kvargs.so.23.0 00:01:20.523 [81/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:20.523 [82/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:20.523 [83/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:20.523 [84/745] Linking static target lib/librte_ring.a 00:01:20.796 [85/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:20.796 [86/745] Linking static target lib/librte_meter.a 00:01:20.796 [87/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:20.796 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:20.796 [89/745] Generating lib/rte_pci_def with a custom command 00:01:20.796 [90/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:20.796 [91/745] Generating lib/rte_pci_mingw with a custom command 00:01:20.796 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:20.796 [93/745] Linking static target lib/librte_pci.a 00:01:20.796 [94/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:20.796 [95/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:20.796 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:20.796 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:21.063 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:21.063 [99/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.063 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:21.063 [101/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:21.063 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:21.063 [103/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:21.063 [104/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.063 [105/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:21.063 [106/745] Generating lib/rte_cmdline_def with a custom command 00:01:21.063 [107/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.063 [108/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:21.063 [109/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:21.063 [110/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:21.063 [111/745] Linking static target lib/librte_telemetry.a 00:01:21.063 [112/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:21.063 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:21.063 [114/745] Generating lib/rte_metrics_mingw with a custom command 00:01:21.329 [115/745] Generating lib/rte_metrics_def with a custom command 00:01:21.329 [116/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:21.329 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:21.329 [118/745] Generating lib/rte_hash_def with a custom command 00:01:21.329 [119/745] Generating lib/rte_hash_mingw with a custom command 00:01:21.329 [120/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:21.329 [121/745] Generating lib/rte_timer_def with a custom command 00:01:21.329 [122/745] Generating 
lib/rte_timer_mingw with a custom command 00:01:21.329 [123/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:21.329 [124/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:21.586 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:21.587 [126/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:21.587 [127/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:21.587 [128/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:21.587 [129/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:21.587 [130/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:21.587 [131/745] Generating lib/rte_acl_def with a custom command 00:01:21.587 [132/745] Generating lib/rte_acl_mingw with a custom command 00:01:21.587 [133/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:21.587 [134/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:21.587 [135/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:21.587 [136/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:21.587 [137/745] Generating lib/rte_bbdev_def with a custom command 00:01:21.587 [138/745] Generating lib/rte_bitratestats_def with a custom command 00:01:21.587 [139/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:21.587 [140/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:21.587 [141/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:21.587 [142/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.587 [143/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:21.587 [144/745] Linking target lib/librte_telemetry.so.23.0 00:01:21.587 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:21.846 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:21.846 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:21.846 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:21.846 [149/745] Generating lib/rte_bpf_def with a custom command 00:01:21.846 [150/745] Generating lib/rte_bpf_mingw with a custom command 00:01:21.846 [151/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:21.846 [152/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:21.846 [153/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:21.846 [154/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:21.846 [155/745] Generating lib/rte_cfgfile_def with a custom command 00:01:21.846 [156/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:21.846 [157/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:21.846 [158/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:22.109 [159/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:22.109 [160/745] Generating lib/rte_compressdev_def with a custom command 00:01:22.109 [161/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:22.109 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:22.109 [163/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:22.109 
[164/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:22.109 [165/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:22.109 [166/745] Generating lib/rte_cryptodev_def with a custom command 00:01:22.109 [167/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:22.109 [168/745] Linking static target lib/librte_rcu.a 00:01:22.109 [169/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:22.109 [170/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:22.109 [171/745] Linking static target lib/librte_timer.a 00:01:22.109 [172/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:22.109 [173/745] Generating lib/rte_distributor_def with a custom command 00:01:22.109 [174/745] Generating lib/rte_distributor_mingw with a custom command 00:01:22.109 [175/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:22.109 [176/745] Linking static target lib/librte_cmdline.a 00:01:22.109 [177/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:22.109 [178/745] Linking static target lib/librte_net.a 00:01:22.109 [179/745] Generating lib/rte_efd_def with a custom command 00:01:22.109 [180/745] Generating lib/rte_efd_mingw with a custom command 00:01:22.368 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:22.368 [182/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:22.368 [183/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:22.368 [184/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:22.368 [185/745] Linking static target lib/librte_cfgfile.a 00:01:22.368 [186/745] Linking static target lib/librte_mempool.a 00:01:22.368 [187/745] Linking static target lib/librte_metrics.a 00:01:22.639 [188/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.639 [189/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.639 [190/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:22.639 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.639 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:22.639 [193/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:22.639 [194/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:22.639 [195/745] Generating lib/rte_eventdev_def with a custom command 00:01:22.639 [196/745] Linking static target lib/librte_eal.a 00:01:22.901 [197/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:22.901 [198/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:22.901 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:22.901 [200/745] Generating lib/rte_gpudev_def with a custom command 00:01:22.901 [201/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:22.901 [202/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:22.901 [203/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:22.901 [204/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:22.901 [205/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:22.901 [206/745] Linking static target lib/librte_bitratestats.a 00:01:22.901 [207/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:22.901 [208/745] Generating lib/rte_gro_def with a custom command 00:01:22.901 [209/745] Generating lib/rte_gro_mingw with a custom command 00:01:22.901 [210/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.167 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:23.167 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:23.167 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:23.167 [214/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:23.167 [215/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:23.167 [216/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.167 [217/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:23.432 [218/745] Generating lib/rte_gso_mingw with a custom command 00:01:23.432 [219/745] Generating lib/rte_gso_def with a custom command 00:01:23.432 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:23.432 [221/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:23.432 [222/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:23.432 [223/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:23.432 [224/745] Linking static target lib/librte_bbdev.a 00:01:23.432 [225/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:23.432 [226/745] Generating lib/rte_ip_frag_def with a custom command 00:01:23.432 [227/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:23.432 [228/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.432 [229/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.690 [230/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:23.690 [231/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:23.690 [232/745] Generating lib/rte_jobstats_def with a custom command 00:01:23.690 [233/745] Generating lib/rte_latencystats_def with a custom command 00:01:23.690 [234/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:23.690 [235/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:23.690 [236/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:23.690 [237/745] Linking static target lib/librte_compressdev.a 00:01:23.690 [238/745] Generating lib/rte_lpm_mingw with a custom command 00:01:23.690 [239/745] Generating lib/rte_lpm_def with a custom command 00:01:23.690 [240/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:23.690 [241/745] Linking static target lib/librte_jobstats.a 00:01:23.690 [242/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:23.952 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:23.952 [244/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:23.952 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:23.952 [246/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:23.952 [247/745] Linking static target lib/librte_distributor.a 00:01:24.218 [248/745] Generating 
lib/rte_member_def with a custom command 00:01:24.218 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:24.218 [250/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.218 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:24.218 [252/745] Generating lib/rte_pcapng_def with a custom command 00:01:24.218 [253/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:24.218 [254/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:24.476 [255/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.476 [256/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:24.476 [257/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:24.476 [258/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:24.476 [259/745] Linking static target lib/librte_bpf.a 00:01:24.476 [260/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:24.476 [261/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:24.476 [262/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:24.476 [263/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:24.476 [264/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:24.476 [265/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:24.476 [266/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.476 [267/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:24.476 [268/745] Generating lib/rte_power_mingw with a custom command 00:01:24.476 [269/745] Generating lib/rte_power_def with a custom command 00:01:24.476 [270/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:24.476 [271/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:24.476 [272/745] Linking static target lib/librte_gpudev.a 00:01:24.476 [273/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:24.476 [274/745] Linking static target lib/librte_gro.a 00:01:24.738 [275/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:24.738 [276/745] Generating lib/rte_rawdev_def with a custom command 00:01:24.738 [277/745] Generating lib/rte_regexdev_def with a custom command 00:01:24.738 [278/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:24.738 [279/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:24.738 [280/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:24.738 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:24.738 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:24.738 [283/745] Generating lib/rte_rib_def with a custom command 00:01:24.738 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:24.738 [285/745] Generating lib/rte_reorder_def with a custom command 00:01:24.738 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:25.001 [287/745] Generating lib/rte_reorder_mingw with a custom command 00:01:25.001 [288/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:25.001 [289/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.001 [290/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.001 [291/745] 
Generating lib/rte_sched_def with a custom command 00:01:25.001 [292/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:25.001 [293/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:25.001 [294/745] Generating lib/rte_sched_mingw with a custom command 00:01:25.001 [295/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:25.001 [296/745] Linking static target lib/librte_latencystats.a 00:01:25.001 [297/745] Generating lib/rte_security_def with a custom command 00:01:25.001 [298/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:25.001 [299/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.001 [300/745] Generating lib/rte_security_mingw with a custom command 00:01:25.001 [301/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:25.278 [302/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:25.278 [303/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:25.278 [304/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:25.278 [305/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:25.278 [306/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:25.278 [307/745] Generating lib/rte_stack_def with a custom command 00:01:25.278 [308/745] Generating lib/rte_stack_mingw with a custom command 00:01:25.278 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:25.278 [310/745] Linking static target lib/librte_rawdev.a 00:01:25.278 [311/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:25.278 [312/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:25.278 [313/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:25.278 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:25.278 [315/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:25.278 [316/745] Generating lib/rte_vhost_def with a custom command 00:01:25.278 [317/745] Generating lib/rte_vhost_mingw with a custom command 00:01:25.278 [318/745] Linking static target lib/librte_stack.a 00:01:25.278 [319/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:25.278 [320/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:25.278 [321/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:25.278 [322/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:25.540 [323/745] Linking static target lib/librte_dmadev.a 00:01:25.540 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:25.540 [325/745] Linking static target lib/librte_ip_frag.a 00:01:25.540 [326/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.540 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:25.540 [328/745] Generating lib/rte_ipsec_def with a custom command 00:01:25.802 [329/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:25.802 [330/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:25.802 [331/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:25.802 [332/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 
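As an aside on reproducing a slice of this phase: every one of the [n/745] progress entries is produced by the single `ninja -C .../build-tmp -j48` call shown at the start of the build, so an individual output can be rebuilt on its own by naming it to ninja. A sketch, using lib/librte_lpm.a purely as an example target taken from the log (valid target names are whatever meson wrote into build.ninja):

$ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -t targets all | grep librte_lpm   # list matching targets
$ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp lib/librte_lpm.a                    # rebuild just that archive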
00:01:25.802 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:25.802 [334/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.068 [335/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.068 [336/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.068 [337/745] Generating lib/rte_fib_def with a custom command 00:01:26.068 [338/745] Generating lib/rte_fib_mingw with a custom command 00:01:26.068 [339/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:26.068 [340/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:26.068 [341/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:26.068 [342/745] Linking static target lib/librte_regexdev.a 00:01:26.068 [343/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:26.068 [344/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:26.068 [345/745] Linking static target lib/librte_gso.a 00:01:26.329 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.329 [347/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:26.329 [348/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:26.329 [349/745] Linking static target lib/librte_pcapng.a 00:01:26.329 [350/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.329 [351/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:26.594 [352/745] Linking static target lib/librte_efd.a 00:01:26.594 [353/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:26.594 [354/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:26.594 [355/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:26.594 [356/745] Linking static target lib/librte_lpm.a 00:01:26.594 [357/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:26.594 [358/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:26.594 [359/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:26.594 [360/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:26.594 [361/745] Linking static target lib/librte_reorder.a 00:01:26.853 [362/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:26.853 [363/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.853 [364/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.853 [365/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:26.853 [366/745] Generating lib/rte_port_def with a custom command 00:01:26.853 [367/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:26.853 [368/745] Generating lib/rte_port_mingw with a custom command 00:01:26.853 [369/745] Linking static target lib/acl/libavx2_tmp.a 00:01:26.853 [370/745] Generating lib/rte_pdump_mingw with a custom command 00:01:26.853 [371/745] Generating lib/rte_pdump_def with a custom command 00:01:27.115 [372/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:27.115 [373/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:27.115 [374/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:27.115 [375/745] 
Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:27.115 [376/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:27.115 [377/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:27.115 [378/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:27.115 [379/745] Linking static target lib/librte_security.a 00:01:27.115 [380/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:27.115 [381/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.115 [382/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:27.115 [383/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.115 [384/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:27.115 [385/745] Linking static target lib/librte_power.a 00:01:27.115 [386/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:27.115 [387/745] Linking static target lib/librte_hash.a 00:01:27.376 [388/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.376 [389/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:27.376 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:27.376 [391/745] Linking static target lib/librte_rib.a 00:01:27.376 [392/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:27.637 [393/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:27.637 [394/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:27.637 [395/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:27.637 [396/745] Linking static target lib/acl/libavx512_tmp.a 00:01:27.637 [397/745] Linking static target lib/librte_acl.a 00:01:27.637 [398/745] Generating lib/rte_table_def with a custom command 00:01:27.903 [399/745] Generating lib/rte_table_mingw with a custom command 00:01:27.903 [400/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.903 [401/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:27.903 [402/745] Linking static target lib/librte_ethdev.a 00:01:27.903 [403/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:27.903 [404/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.161 [405/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.161 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:28.161 [407/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:28.161 [408/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:28.161 [409/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.161 [410/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:28.161 [411/745] Generating lib/rte_pipeline_def with a custom command 00:01:28.161 [412/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:28.428 [413/745] Linking static target lib/librte_mbuf.a 00:01:28.428 [414/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:28.428 [415/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:28.428 [416/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 
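The libavx2_tmp/libavx512_tmp archives being linked in this stretch (for acl, fib and, later, member and i40e) are DPDK's ISA-specific code paths, built as separate helper libraries; as far as I understand they are chosen at run time from the CPU's feature flags rather than fixed at configure time. A rough way to see what the build host itself advertises (grep patterns are only illustrative):

$ grep -m1 -o 'avx2' /proc/cpuinfo                                      # baseline AVX2 support
$ grep -m1 'flags' /proc/cpuinfo | grep -o 'avx512[a-z_]*' | sort -u    # AVX-512 sub-features, if any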
00:01:28.428 [417/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:28.428 [418/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:28.428 [419/745] Generating lib/rte_graph_def with a custom command 00:01:28.428 [420/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:28.428 [421/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:28.428 [422/745] Generating lib/rte_graph_mingw with a custom command 00:01:28.428 [423/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:28.428 [424/745] Linking static target lib/librte_fib.a 00:01:28.689 [425/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:28.689 [426/745] Linking static target lib/librte_member.a 00:01:28.689 [427/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:28.689 [428/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:28.689 [429/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.689 [430/745] Linking static target lib/librte_eventdev.a 00:01:28.689 [431/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:28.689 [432/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:28.689 [433/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:28.689 [434/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:28.689 [435/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:28.689 [436/745] Generating lib/rte_node_def with a custom command 00:01:28.953 [437/745] Generating lib/rte_node_mingw with a custom command 00:01:28.953 [438/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:28.953 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:28.953 [440/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:28.953 [441/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:28.953 [442/745] Linking static target lib/librte_sched.a 00:01:28.953 [443/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.953 [444/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:28.953 [445/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:28.953 [446/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:28.953 [447/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.215 [448/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:29.215 [449/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.215 [450/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:29.215 [451/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:29.215 [452/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:29.215 [453/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:29.215 [454/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:29.215 [455/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:29.215 [456/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:29.215 [457/745] Generating drivers/rte_mempool_ring_mingw with a custom 
command 00:01:29.215 [458/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:29.215 [459/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:29.473 [460/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:29.473 [461/745] Linking static target lib/librte_cryptodev.a 00:01:29.473 [462/745] Linking static target lib/librte_pdump.a 00:01:29.474 [463/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:29.474 [464/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:29.474 [465/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:29.474 [466/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:29.474 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:29.474 [468/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:29.474 [469/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:29.474 [470/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:29.733 [471/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:29.733 [472/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:29.733 [473/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.733 [474/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:29.733 [475/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:29.733 [476/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:29.733 [477/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:29.733 [478/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:29.733 [479/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:29.733 [480/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.996 [481/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:29.996 [482/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:29.996 [483/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:29.996 [484/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:29.996 [485/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:29.996 [486/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:29.996 [487/745] Linking static target drivers/librte_bus_vdev.a 00:01:29.996 [488/745] Linking static target lib/librte_table.a 00:01:30.257 [489/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:30.257 [490/745] Linking static target lib/librte_ipsec.a 00:01:30.257 [491/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:30.257 [492/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:30.257 [493/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:30.257 [494/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:30.519 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.519 [496/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:30.519 [497/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:30.519 
[498/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:30.782 [499/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:30.782 [500/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:30.782 [501/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:30.782 [502/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:30.782 [503/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.782 [504/745] Linking static target lib/librte_graph.a 00:01:30.782 [505/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:30.782 [506/745] Linking static target drivers/librte_bus_pci.a 00:01:30.782 [507/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:30.782 [508/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:30.782 [509/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:30.782 [510/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:30.782 [511/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:31.044 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:31.044 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:31.044 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.308 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:31.308 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.572 [517/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.572 [518/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:31.572 [519/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:31.834 [520/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:31.834 [521/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:31.834 [522/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:31.834 [523/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:31.834 [524/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:31.834 [525/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:31.834 [526/745] Linking static target lib/librte_port.a 00:01:32.101 [527/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.101 [528/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:32.101 [529/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:32.101 [530/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:32.101 [531/745] Linking static target drivers/librte_mempool_ring.a 00:01:32.101 [532/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:32.360 [533/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:32.360 [534/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:32.360 [535/745] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:32.360 [536/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:32.360 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:32.360 [538/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:32.623 [539/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:32.623 [540/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.623 [541/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.885 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:32.885 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:32.885 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:33.152 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:33.152 [546/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:33.152 [547/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:33.152 [548/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:33.152 [549/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:33.152 [550/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:33.414 [551/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:33.676 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:33.676 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:33.940 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:33.940 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:33.940 [556/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:33.940 [557/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:33.940 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:34.200 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:34.200 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:34.458 [561/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:34.458 [562/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:34.458 [563/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:34.458 [564/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:34.458 [565/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:34.718 [566/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:34.718 [567/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:34.718 [568/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:34.718 [569/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:34.718 [570/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:34.718 [571/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 
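Consistent with the enable_drivers value in the configuration summary, the only PMD sources compiled in this run are the i40e net driver (with its base code) plus the pci/vdev bus and ring mempool drivers; everything else was reported earlier as "not in enabled drivers build config". If in doubt, the effective options of an existing build directory can be dumped with meson configure; a sketch (the grep expression is just illustrative):

$ meson configure /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp | grep -E 'enable_drivers|disable_drivers|enable_docs|tests'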
00:01:34.979 [572/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:34.979 [573/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:34.979 [574/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.244 [575/745] Linking target lib/librte_eal.so.23.0 00:01:35.244 [576/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:35.244 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:35.244 [578/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:35.244 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:35.244 [580/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:35.506 [581/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:35.506 [582/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:35.506 [583/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:35.506 [584/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:35.506 [585/745] Linking target lib/librte_ring.so.23.0 00:01:35.506 [586/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:35.506 [587/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:35.506 [588/745] Linking target lib/librte_meter.so.23.0 00:01:35.506 [589/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:35.506 [590/745] Linking target lib/librte_pci.so.23.0 00:01:35.506 [591/745] Linking target lib/librte_timer.so.23.0 00:01:35.506 [592/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:35.768 [593/745] Linking target lib/librte_acl.so.23.0 00:01:35.768 [594/745] Linking target lib/librte_cfgfile.so.23.0 00:01:35.768 [595/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:35.768 [596/745] Linking target lib/librte_jobstats.so.23.0 00:01:35.768 [597/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:35.768 [598/745] Linking target lib/librte_rcu.so.23.0 00:01:35.768 [599/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:35.768 [600/745] Linking target lib/librte_mempool.so.23.0 00:01:35.768 [601/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:35.768 [602/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.768 [603/745] Linking target lib/librte_rawdev.so.23.0 00:01:35.768 [604/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:35.768 [605/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:35.768 [606/745] Linking target lib/librte_dmadev.so.23.0 00:01:36.031 [607/745] Linking target lib/librte_stack.so.23.0 00:01:36.031 [608/745] Linking target lib/librte_graph.so.23.0 00:01:36.031 [609/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:36.031 [610/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:36.031 [611/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:36.031 [612/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:36.031 [613/745] Compiling C 
object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:36.031 [614/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:36.031 [615/745] Linking target lib/librte_mbuf.so.23.0 00:01:36.031 [616/745] Linking target lib/librte_rib.so.23.0 00:01:36.292 [617/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:36.292 [618/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:36.292 [619/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:36.292 [620/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:36.292 [621/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:36.292 [622/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:36.292 [623/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:36.292 [624/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:36.292 [625/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:36.292 [626/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:36.292 [627/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:36.292 [628/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:36.560 [629/745] Linking target lib/librte_fib.so.23.0 00:01:36.560 [630/745] Linking target lib/librte_bbdev.so.23.0 00:01:36.560 [631/745] Linking target lib/librte_net.so.23.0 00:01:36.560 [632/745] Linking target lib/librte_distributor.so.23.0 00:01:36.560 [633/745] Linking target lib/librte_gpudev.so.23.0 00:01:36.560 [634/745] Linking target lib/librte_reorder.so.23.0 00:01:36.560 [635/745] Linking target lib/librte_regexdev.so.23.0 00:01:36.560 [636/745] Linking target lib/librte_compressdev.so.23.0 00:01:36.560 [637/745] Linking target lib/librte_cryptodev.so.23.0 00:01:36.560 [638/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:36.560 [639/745] Linking target lib/librte_sched.so.23.0 00:01:36.560 [640/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:36.560 [641/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:36.560 [642/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:36.831 [643/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:36.831 [644/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:36.831 [645/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:36.831 [646/745] Linking target lib/librte_cmdline.so.23.0 00:01:36.831 [647/745] Linking target lib/librte_hash.so.23.0 00:01:36.831 [648/745] Linking target lib/librte_security.so.23.0 00:01:36.831 [649/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:36.831 [650/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:36.831 [651/745] Linking target lib/librte_ethdev.so.23.0 00:01:36.831 [652/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:36.831 [653/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:36.831 [654/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:37.089 [655/745] Generating symbol file 
lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:37.089 [656/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:37.089 [657/745] Linking target lib/librte_metrics.so.23.0 00:01:37.089 [658/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:37.089 [659/745] Linking target lib/librte_member.so.23.0 00:01:37.089 [660/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:37.089 [661/745] Linking target lib/librte_gso.so.23.0 00:01:37.089 [662/745] Linking target lib/librte_bpf.so.23.0 00:01:37.089 [663/745] Linking target lib/librte_efd.so.23.0 00:01:37.089 [664/745] Linking target lib/librte_pcapng.so.23.0 00:01:37.089 [665/745] Linking target lib/librte_lpm.so.23.0 00:01:37.089 [666/745] Linking target lib/librte_power.so.23.0 00:01:37.089 [667/745] Linking target lib/librte_gro.so.23.0 00:01:37.089 [668/745] Linking target lib/librte_ip_frag.so.23.0 00:01:37.089 [669/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:37.089 [670/745] Linking target lib/librte_eventdev.so.23.0 00:01:37.089 [671/745] Linking target lib/librte_ipsec.so.23.0 00:01:37.089 [672/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:37.089 [673/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:37.089 [674/745] Linking target lib/librte_latencystats.so.23.0 00:01:37.347 [675/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:37.347 [676/745] Linking target lib/librte_bitratestats.so.23.0 00:01:37.347 [677/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:37.347 [678/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:37.347 [679/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:37.347 [680/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:37.347 [681/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:37.347 [682/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:37.347 [683/745] Linking target lib/librte_pdump.so.23.0 00:01:37.347 [684/745] Linking target lib/librte_port.so.23.0 00:01:37.347 [685/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:37.347 [686/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:37.605 [687/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:37.605 [688/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:37.605 [689/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:37.605 [690/745] Linking target lib/librte_table.so.23.0 00:01:37.605 [691/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:37.605 [692/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:37.605 [693/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:37.605 [694/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:38.171 [695/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:38.171 [696/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:38.171 [697/745] Compiling C object 
app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:38.429 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:38.429 [699/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:38.686 [700/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:38.686 [701/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:38.943 [702/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:38.943 [703/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:38.943 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:39.200 [705/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:39.200 [706/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:39.200 [707/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:39.200 [708/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:39.200 [709/745] Linking static target drivers/librte_net_i40e.a 00:01:39.458 [710/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:39.716 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.973 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:01:40.539 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:40.539 [714/745] Linking static target lib/librte_node.a 00:01:40.539 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.797 [716/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:40.797 [717/745] Linking target lib/librte_node.so.23.0 00:01:41.728 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:41.728 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:49.836 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:21.937 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:21.937 [722/745] Linking static target lib/librte_vhost.a 00:02:21.937 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.937 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:31.910 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:31.910 [726/745] Linking static target lib/librte_pipeline.a 00:02:31.910 [727/745] Linking target app/dpdk-test-fib 00:02:31.910 [728/745] Linking target app/dpdk-pdump 00:02:31.910 [729/745] Linking target app/dpdk-test-cmdline 00:02:31.910 [730/745] Linking target app/dpdk-proc-info 00:02:31.910 [731/745] Linking target app/dpdk-dumpcap 00:02:31.910 [732/745] Linking target app/dpdk-test-acl 00:02:31.910 [733/745] Linking target app/dpdk-test-flow-perf 00:02:31.910 [734/745] Linking target app/dpdk-test-pipeline 00:02:31.910 [735/745] Linking target app/dpdk-test-regex 00:02:31.910 [736/745] Linking target app/dpdk-test-gpudev 00:02:31.910 [737/745] Linking target app/dpdk-test-security-perf 00:02:31.910 [738/745] Linking target app/dpdk-test-sad 00:02:31.910 [739/745] Linking target app/dpdk-test-bbdev 00:02:31.910 [740/745] Linking target app/dpdk-test-crypto-perf 00:02:31.910 [741/745] Linking target app/dpdk-test-eventdev 00:02:31.910 [742/745] Linking target 
app/dpdk-test-compress-perf 00:02:31.910 [743/745] Linking target app/dpdk-testpmd 00:02:33.283 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.283 [745/745] Linking target lib/librte_pipeline.so.23.0 00:02:33.283 17:38:07 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:33.283 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:33.283 [0/1] Installing files. 00:02:33.543 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
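For reference, the step above is the standard meson/ninja flow for DPDK: configure a build directory, compile, then "ninja install" into the configured prefix, which is why the example sources land under dpdk/build/share/dpdk/examples. A minimal sketch of the equivalent commands, assuming a DPDK checkout in ./dpdk; the in-tree prefix is inferred from the install paths below, and the exact meson options passed by autobuild_common.sh are not visible in this excerpt:

  $ cd dpdk
  $ meson setup build-tmp --prefix="$(pwd)/build"   # configure; prefix value is an assumption based on the install destinations
  $ ninja -C build-tmp -j48                         # compile (the 745 targets listed above)
  $ ninja -C build-tmp install                      # copy libs, headers and the examples/ tree into the prefix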
00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:33.543 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:33.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:33.544 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.544 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:33.545 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.545 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:33.546 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.805 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 
00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:33.805 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:33.806 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:33.806 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 
Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_bitratestats.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_pcapng.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:33.806 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:34.375 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:34.375 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:34.375 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:34.375 Installing lib/librte_pipeline.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:34.375 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:34.375 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:34.375 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:34.375 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:34.375 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:34.375 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:34.375 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:34.375 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:34.375 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:34.375 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:34.375 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:34.375 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:34.375 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.375 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.375 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.375 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.375 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.375 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.375 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.375 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.375 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.375 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.375 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.375 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.375 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.375 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.375 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.375 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.375 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.375 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:34.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:34.379 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:34.379 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:34.379 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:34.379 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:34.379 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:34.379 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:34.379 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:34.379 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:34.379 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:34.379 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:34.379 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:34.379 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:34.379 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:34.379 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:34.379 Installing symlink pointing to librte_net.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:34.379 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:34.379 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:34.379 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:34.379 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:34.379 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:34.379 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:34.379 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:34.379 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:34.379 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:34.379 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:34.380 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:34.380 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:34.380 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:34.380 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:34.380 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:34.380 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:34.380 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:34.380 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:34.380 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:34.380 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:34.380 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:34.380 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:34.380 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:34.380 Installing symlink pointing to librte_cfgfile.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:34.380 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:34.380 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:34.380 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:34.380 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:34.380 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:34.380 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:34.380 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:34.380 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:34.380 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:34.380 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:34.380 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:34.380 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:34.380 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:34.380 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:34.380 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:34.380 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:34.380 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:34.380 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:34.380 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:34.380 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:34.380 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:34.380 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:34.380 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:34.380 Installing symlink pointing to 
librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:34.380 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:34.380 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:34.380 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:34.380 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:34.380 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:34.380 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:34.380 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:34.380 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:34.380 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:34.380 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:34.380 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:34.380 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:34.380 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:34.380 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:34.380 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:34.380 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:34.380 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:34.380 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:34.380 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:34.380 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:34.381 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:34.381 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:34.381 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:34.381 Installing symlink pointing to librte_vhost.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:34.381 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:34.381 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:34.381 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:34.381 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:34.381 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:34.381 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:34.381 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:34.381 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:34.381 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:34.381 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:34.381 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:34.381 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:34.381 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:34.381 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:34.381 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:34.381 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:34.381 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:34.381 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:34.381 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:34.381 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:34.381 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:34.381 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:34.381 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:34.381 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:34.381 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:34.381 './librte_bus_vdev.so.23' -> 
'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:34.381 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:34.381 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:34.381 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:34.381 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:34.381 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:34.381 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:34.381 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:34.381 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:34.381 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:34.381 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:34.381 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:34.381 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:34.381 17:38:08 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:02:34.381 17:38:08 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:34.381 17:38:08 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:02:34.381 17:38:08 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.381 00:02:34.381 real 1m19.554s 00:02:34.381 user 14m19.613s 00:02:34.381 sys 1m47.091s 00:02:34.381 17:38:08 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:34.381 17:38:08 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:34.381 ************************************ 00:02:34.381 END TEST build_native_dpdk 00:02:34.381 ************************************ 00:02:34.381 17:38:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:34.381 17:38:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:34.381 17:38:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:34.381 17:38:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:34.381 17:38:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:34.381 17:38:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:34.381 17:38:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:34.381 17:38:09 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:34.381 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
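The configure step above points SPDK at the freshly staged DPDK tree through its pkg-config metadata (libdpdk.pc and libdpdk-libs.pc were installed into dpdk/build/lib/pkgconfig a few entries earlier). The CI job does this internally; as a minimal illustrative sketch only, the same lookup could be reproduced by hand with a stock pkg-config and the workspace paths from this log:
# Hypothetical manual check, not part of the test run; paths copied from the log above.
export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
pkg-config --modversion libdpdk   # reports the staged DPDK version
pkg-config --cflags libdpdk       # include flags, e.g. -I.../dpdk/build/include
pkg-config --libs libdpdk         # linker flags for the shared DPDK libraries installed above
This is the mechanism behind the "Using .../dpdk/build/lib/pkgconfig for additional libs..." message: the DPDK library and include paths reported on the next lines come from those .pc files rather than from any system-wide DPDK installation.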
00:02:34.679 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:34.679 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:34.679 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:34.936 Using 'verbs' RDMA provider 00:02:45.466 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:53.648 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:53.648 Creating mk/config.mk...done. 00:02:53.648 Creating mk/cc.flags.mk...done. 00:02:53.648 Type 'make' to build. 00:02:53.648 17:38:28 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:53.648 17:38:28 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:53.648 17:38:28 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:53.648 17:38:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:53.648 ************************************ 00:02:53.648 START TEST make 00:02:53.648 ************************************ 00:02:53.648 17:38:28 make -- common/autotest_common.sh@1121 -- $ make -j48 00:02:53.909 make[1]: Nothing to be done for 'all'. 00:02:55.301 The Meson build system 00:02:55.301 Version: 1.3.1 00:02:55.301 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:55.301 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:55.301 Build type: native build 00:02:55.301 Project name: libvfio-user 00:02:55.301 Project version: 0.0.1 00:02:55.301 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:55.301 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:55.301 Host machine cpu family: x86_64 00:02:55.301 Host machine cpu: x86_64 00:02:55.301 Run-time dependency threads found: YES 00:02:55.301 Library dl found: YES 00:02:55.302 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:55.302 Run-time dependency json-c found: YES 0.17 00:02:55.302 Run-time dependency cmocka found: YES 1.1.7 00:02:55.302 Program pytest-3 found: NO 00:02:55.302 Program flake8 found: NO 00:02:55.302 Program misspell-fixer found: NO 00:02:55.302 Program restructuredtext-lint found: NO 00:02:55.302 Program valgrind found: YES (/usr/bin/valgrind) 00:02:55.302 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:55.302 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:55.302 Compiler for C supports arguments -Wwrite-strings: YES 00:02:55.302 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:55.302 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:55.302 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:55.302 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:55.302 Build targets in project: 8 00:02:55.302 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:55.302 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:55.302 00:02:55.302 libvfio-user 0.0.1 00:02:55.302 00:02:55.302 User defined options 00:02:55.302 buildtype : debug 00:02:55.302 default_library: shared 00:02:55.302 libdir : /usr/local/lib 00:02:55.302 00:02:55.302 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:56.245 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:56.245 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:56.504 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:56.504 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:56.504 [4/37] Compiling C object samples/null.p/null.c.o 00:02:56.504 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:56.504 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:56.504 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:56.504 [8/37] Compiling C object samples/server.p/server.c.o 00:02:56.504 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:56.504 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:56.504 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:56.504 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:56.504 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:56.504 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:56.504 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:56.504 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:56.504 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:56.504 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:56.504 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:56.504 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:56.504 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:56.504 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:56.504 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:56.504 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:56.504 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:56.768 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:56.768 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:02:56.768 [28/37] Compiling C object samples/client.p/client.c.o 00:02:56.768 [29/37] Linking target samples/client 00:02:56.768 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:56.768 [31/37] Linking target test/unit_tests 00:02:57.030 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:57.030 [33/37] Linking target samples/gpio-pci-idio-16 00:02:57.030 [34/37] Linking target samples/lspci 00:02:57.030 [35/37] Linking target samples/server 00:02:57.030 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:57.030 [37/37] Linking target samples/null 00:02:57.030 INFO: autodetecting backend as ninja 00:02:57.030 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
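The Meson summary above (buildtype debug, default_library shared, libdir /usr/local/lib) comes from SPDK's build of its bundled libvfio-user, and the next entry installs it with DESTDIR so nothing lands outside the workspace. As a rough sketch under those same assumptions (directory names taken from the log, not re-run here), the equivalent manual sequence would be:
# Illustrative only; mirrors the options reported by Meson in this log.
SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
meson setup "$BUILD" "$SRC" -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
ninja -C "$BUILD"
DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C "$BUILD"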
00:02:57.030 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:57.597 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:57.597 ninja: no work to do. 00:03:09.794 CC lib/ut_mock/mock.o 00:03:09.794 CC lib/log/log.o 00:03:09.794 CC lib/log/log_flags.o 00:03:09.794 CC lib/log/log_deprecated.o 00:03:09.794 CC lib/ut/ut.o 00:03:09.794 LIB libspdk_log.a 00:03:09.794 LIB libspdk_ut_mock.a 00:03:09.794 LIB libspdk_ut.a 00:03:09.794 SO libspdk_ut_mock.so.6.0 00:03:09.794 SO libspdk_log.so.7.0 00:03:09.794 SO libspdk_ut.so.2.0 00:03:09.794 SYMLINK libspdk_ut_mock.so 00:03:09.794 SYMLINK libspdk_ut.so 00:03:09.794 SYMLINK libspdk_log.so 00:03:09.794 CXX lib/trace_parser/trace.o 00:03:09.794 CC lib/ioat/ioat.o 00:03:09.794 CC lib/dma/dma.o 00:03:09.794 CC lib/util/base64.o 00:03:09.794 CC lib/util/bit_array.o 00:03:09.794 CC lib/util/cpuset.o 00:03:09.794 CC lib/util/crc16.o 00:03:09.794 CC lib/util/crc32.o 00:03:09.794 CC lib/util/crc32c.o 00:03:09.794 CC lib/util/crc32_ieee.o 00:03:09.794 CC lib/util/crc64.o 00:03:09.794 CC lib/util/dif.o 00:03:09.794 CC lib/util/fd.o 00:03:09.794 CC lib/util/file.o 00:03:09.794 CC lib/util/hexlify.o 00:03:09.794 CC lib/util/iov.o 00:03:09.794 CC lib/util/math.o 00:03:09.794 CC lib/util/pipe.o 00:03:09.794 CC lib/util/strerror_tls.o 00:03:09.794 CC lib/util/string.o 00:03:09.794 CC lib/util/uuid.o 00:03:09.794 CC lib/util/fd_group.o 00:03:09.794 CC lib/util/xor.o 00:03:09.794 CC lib/util/zipf.o 00:03:09.794 CC lib/vfio_user/host/vfio_user_pci.o 00:03:09.794 CC lib/vfio_user/host/vfio_user.o 00:03:09.794 LIB libspdk_dma.a 00:03:09.794 SO libspdk_dma.so.4.0 00:03:09.794 LIB libspdk_ioat.a 00:03:09.794 SO libspdk_ioat.so.7.0 00:03:09.794 SYMLINK libspdk_dma.so 00:03:09.794 SYMLINK libspdk_ioat.so 00:03:09.794 LIB libspdk_vfio_user.a 00:03:09.794 SO libspdk_vfio_user.so.5.0 00:03:09.794 SYMLINK libspdk_vfio_user.so 00:03:10.052 LIB libspdk_util.a 00:03:10.052 SO libspdk_util.so.9.0 00:03:10.052 SYMLINK libspdk_util.so 00:03:10.310 CC lib/idxd/idxd.o 00:03:10.310 CC lib/json/json_parse.o 00:03:10.310 CC lib/rdma/common.o 00:03:10.310 CC lib/idxd/idxd_user.o 00:03:10.310 CC lib/json/json_util.o 00:03:10.310 CC lib/rdma/rdma_verbs.o 00:03:10.310 CC lib/vmd/vmd.o 00:03:10.310 CC lib/env_dpdk/env.o 00:03:10.310 CC lib/conf/conf.o 00:03:10.310 CC lib/idxd/idxd_kernel.o 00:03:10.310 CC lib/json/json_write.o 00:03:10.310 CC lib/vmd/led.o 00:03:10.310 CC lib/env_dpdk/memory.o 00:03:10.310 CC lib/env_dpdk/pci.o 00:03:10.310 CC lib/env_dpdk/init.o 00:03:10.310 CC lib/env_dpdk/threads.o 00:03:10.310 CC lib/env_dpdk/pci_ioat.o 00:03:10.310 CC lib/env_dpdk/pci_virtio.o 00:03:10.310 CC lib/env_dpdk/pci_vmd.o 00:03:10.310 CC lib/env_dpdk/pci_idxd.o 00:03:10.310 CC lib/env_dpdk/pci_event.o 00:03:10.310 CC lib/env_dpdk/sigbus_handler.o 00:03:10.310 CC lib/env_dpdk/pci_dpdk.o 00:03:10.310 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:10.310 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:10.310 LIB libspdk_trace_parser.a 00:03:10.310 SO libspdk_trace_parser.so.5.0 00:03:10.584 SYMLINK libspdk_trace_parser.so 00:03:10.585 LIB libspdk_json.a 00:03:10.585 LIB libspdk_rdma.a 00:03:10.585 LIB libspdk_conf.a 00:03:10.585 SO libspdk_json.so.6.0 00:03:10.585 SO libspdk_rdma.so.6.0 00:03:10.585 SO libspdk_conf.so.6.0 00:03:10.842 SYMLINK libspdk_conf.so 00:03:10.842 SYMLINK libspdk_json.so 00:03:10.842 SYMLINK 
libspdk_rdma.so 00:03:10.842 CC lib/jsonrpc/jsonrpc_server.o 00:03:10.842 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:10.842 CC lib/jsonrpc/jsonrpc_client.o 00:03:10.842 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:10.842 LIB libspdk_idxd.a 00:03:10.842 SO libspdk_idxd.so.12.0 00:03:11.099 SYMLINK libspdk_idxd.so 00:03:11.099 LIB libspdk_vmd.a 00:03:11.099 SO libspdk_vmd.so.6.0 00:03:11.099 SYMLINK libspdk_vmd.so 00:03:11.099 LIB libspdk_jsonrpc.a 00:03:11.099 SO libspdk_jsonrpc.so.6.0 00:03:11.357 SYMLINK libspdk_jsonrpc.so 00:03:11.357 CC lib/rpc/rpc.o 00:03:11.616 LIB libspdk_rpc.a 00:03:11.616 SO libspdk_rpc.so.6.0 00:03:11.616 SYMLINK libspdk_rpc.so 00:03:11.874 CC lib/notify/notify.o 00:03:11.874 CC lib/trace/trace.o 00:03:11.874 CC lib/trace/trace_flags.o 00:03:11.874 CC lib/notify/notify_rpc.o 00:03:11.874 CC lib/keyring/keyring.o 00:03:11.874 CC lib/trace/trace_rpc.o 00:03:11.874 CC lib/keyring/keyring_rpc.o 00:03:12.133 LIB libspdk_notify.a 00:03:12.133 SO libspdk_notify.so.6.0 00:03:12.133 LIB libspdk_keyring.a 00:03:12.133 SYMLINK libspdk_notify.so 00:03:12.133 LIB libspdk_trace.a 00:03:12.133 SO libspdk_keyring.so.1.0 00:03:12.133 SO libspdk_trace.so.10.0 00:03:12.133 SYMLINK libspdk_keyring.so 00:03:12.133 SYMLINK libspdk_trace.so 00:03:12.392 LIB libspdk_env_dpdk.a 00:03:12.392 SO libspdk_env_dpdk.so.14.0 00:03:12.392 CC lib/sock/sock.o 00:03:12.392 CC lib/sock/sock_rpc.o 00:03:12.392 CC lib/thread/thread.o 00:03:12.392 CC lib/thread/iobuf.o 00:03:12.392 SYMLINK libspdk_env_dpdk.so 00:03:12.650 LIB libspdk_sock.a 00:03:12.909 SO libspdk_sock.so.9.0 00:03:12.909 SYMLINK libspdk_sock.so 00:03:12.909 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:12.909 CC lib/nvme/nvme_ctrlr.o 00:03:12.909 CC lib/nvme/nvme_fabric.o 00:03:12.909 CC lib/nvme/nvme_ns_cmd.o 00:03:12.909 CC lib/nvme/nvme_ns.o 00:03:12.909 CC lib/nvme/nvme_pcie_common.o 00:03:12.909 CC lib/nvme/nvme_pcie.o 00:03:12.909 CC lib/nvme/nvme_qpair.o 00:03:12.909 CC lib/nvme/nvme.o 00:03:12.909 CC lib/nvme/nvme_quirks.o 00:03:12.909 CC lib/nvme/nvme_transport.o 00:03:12.909 CC lib/nvme/nvme_discovery.o 00:03:12.909 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:12.909 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:12.909 CC lib/nvme/nvme_tcp.o 00:03:12.909 CC lib/nvme/nvme_opal.o 00:03:12.909 CC lib/nvme/nvme_io_msg.o 00:03:12.909 CC lib/nvme/nvme_poll_group.o 00:03:12.909 CC lib/nvme/nvme_zns.o 00:03:12.909 CC lib/nvme/nvme_stubs.o 00:03:12.909 CC lib/nvme/nvme_auth.o 00:03:12.909 CC lib/nvme/nvme_cuse.o 00:03:12.909 CC lib/nvme/nvme_vfio_user.o 00:03:12.909 CC lib/nvme/nvme_rdma.o 00:03:13.867 LIB libspdk_thread.a 00:03:13.867 SO libspdk_thread.so.10.0 00:03:14.125 SYMLINK libspdk_thread.so 00:03:14.125 CC lib/blob/blobstore.o 00:03:14.125 CC lib/blob/request.o 00:03:14.125 CC lib/init/json_config.o 00:03:14.125 CC lib/vfu_tgt/tgt_endpoint.o 00:03:14.125 CC lib/virtio/virtio.o 00:03:14.125 CC lib/accel/accel.o 00:03:14.125 CC lib/blob/zeroes.o 00:03:14.126 CC lib/vfu_tgt/tgt_rpc.o 00:03:14.126 CC lib/init/subsystem.o 00:03:14.126 CC lib/virtio/virtio_vhost_user.o 00:03:14.126 CC lib/blob/blob_bs_dev.o 00:03:14.126 CC lib/accel/accel_rpc.o 00:03:14.126 CC lib/init/subsystem_rpc.o 00:03:14.126 CC lib/virtio/virtio_vfio_user.o 00:03:14.126 CC lib/accel/accel_sw.o 00:03:14.126 CC lib/init/rpc.o 00:03:14.126 CC lib/virtio/virtio_pci.o 00:03:14.384 LIB libspdk_init.a 00:03:14.642 SO libspdk_init.so.5.0 00:03:14.642 LIB libspdk_vfu_tgt.a 00:03:14.642 LIB libspdk_virtio.a 00:03:14.642 SYMLINK libspdk_init.so 00:03:14.642 SO libspdk_vfu_tgt.so.3.0 00:03:14.642 
SO libspdk_virtio.so.7.0 00:03:14.642 SYMLINK libspdk_vfu_tgt.so 00:03:14.642 SYMLINK libspdk_virtio.so 00:03:14.642 CC lib/event/app.o 00:03:14.642 CC lib/event/reactor.o 00:03:14.642 CC lib/event/log_rpc.o 00:03:14.642 CC lib/event/app_rpc.o 00:03:14.642 CC lib/event/scheduler_static.o 00:03:15.208 LIB libspdk_event.a 00:03:15.208 SO libspdk_event.so.13.0 00:03:15.208 SYMLINK libspdk_event.so 00:03:15.208 LIB libspdk_accel.a 00:03:15.208 SO libspdk_accel.so.15.0 00:03:15.466 SYMLINK libspdk_accel.so 00:03:15.466 CC lib/bdev/bdev.o 00:03:15.466 CC lib/bdev/bdev_rpc.o 00:03:15.466 CC lib/bdev/bdev_zone.o 00:03:15.466 CC lib/bdev/part.o 00:03:15.466 CC lib/bdev/scsi_nvme.o 00:03:15.741 LIB libspdk_nvme.a 00:03:15.741 SO libspdk_nvme.so.13.0 00:03:16.020 SYMLINK libspdk_nvme.so 00:03:17.390 LIB libspdk_blob.a 00:03:17.390 SO libspdk_blob.so.11.0 00:03:17.390 SYMLINK libspdk_blob.so 00:03:17.390 CC lib/lvol/lvol.o 00:03:17.390 CC lib/blobfs/blobfs.o 00:03:17.390 CC lib/blobfs/tree.o 00:03:18.330 LIB libspdk_bdev.a 00:03:18.330 SO libspdk_bdev.so.15.0 00:03:18.330 SYMLINK libspdk_bdev.so 00:03:18.330 LIB libspdk_blobfs.a 00:03:18.330 SO libspdk_blobfs.so.10.0 00:03:18.330 SYMLINK libspdk_blobfs.so 00:03:18.330 LIB libspdk_lvol.a 00:03:18.330 CC lib/nbd/nbd.o 00:03:18.330 CC lib/scsi/dev.o 00:03:18.330 CC lib/nvmf/ctrlr.o 00:03:18.330 CC lib/ublk/ublk.o 00:03:18.330 CC lib/scsi/lun.o 00:03:18.330 CC lib/nbd/nbd_rpc.o 00:03:18.330 CC lib/ftl/ftl_core.o 00:03:18.330 CC lib/nvmf/ctrlr_discovery.o 00:03:18.330 CC lib/ublk/ublk_rpc.o 00:03:18.330 CC lib/scsi/port.o 00:03:18.330 CC lib/nvmf/ctrlr_bdev.o 00:03:18.330 CC lib/ftl/ftl_init.o 00:03:18.330 CC lib/scsi/scsi.o 00:03:18.330 CC lib/nvmf/subsystem.o 00:03:18.330 CC lib/scsi/scsi_bdev.o 00:03:18.330 CC lib/ftl/ftl_layout.o 00:03:18.330 CC lib/ftl/ftl_debug.o 00:03:18.330 CC lib/nvmf/nvmf.o 00:03:18.330 CC lib/scsi/scsi_pr.o 00:03:18.330 CC lib/ftl/ftl_io.o 00:03:18.330 CC lib/nvmf/nvmf_rpc.o 00:03:18.330 CC lib/scsi/scsi_rpc.o 00:03:18.330 CC lib/scsi/task.o 00:03:18.330 CC lib/nvmf/transport.o 00:03:18.330 CC lib/nvmf/tcp.o 00:03:18.330 CC lib/ftl/ftl_sb.o 00:03:18.330 CC lib/nvmf/stubs.o 00:03:18.330 CC lib/ftl/ftl_l2p.o 00:03:18.330 CC lib/nvmf/mdns_server.o 00:03:18.330 CC lib/ftl/ftl_l2p_flat.o 00:03:18.330 CC lib/nvmf/vfio_user.o 00:03:18.330 CC lib/ftl/ftl_nv_cache.o 00:03:18.330 CC lib/nvmf/rdma.o 00:03:18.330 CC lib/ftl/ftl_band_ops.o 00:03:18.330 CC lib/ftl/ftl_band.o 00:03:18.330 CC lib/nvmf/auth.o 00:03:18.330 CC lib/ftl/ftl_writer.o 00:03:18.330 CC lib/ftl/ftl_rq.o 00:03:18.330 CC lib/ftl/ftl_reloc.o 00:03:18.330 CC lib/ftl/ftl_l2p_cache.o 00:03:18.330 CC lib/ftl/ftl_p2l.o 00:03:18.330 CC lib/ftl/mngt/ftl_mngt.o 00:03:18.330 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:18.330 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:18.330 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:18.330 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:18.330 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:18.330 SO libspdk_lvol.so.10.0 00:03:18.588 SYMLINK libspdk_lvol.so 00:03:18.588 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:18.847 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:18.847 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:18.847 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:18.847 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:18.847 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:18.847 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:18.847 CC lib/ftl/utils/ftl_conf.o 00:03:18.847 CC lib/ftl/utils/ftl_md.o 00:03:18.847 CC lib/ftl/utils/ftl_mempool.o 00:03:18.847 CC lib/ftl/utils/ftl_bitmap.o 00:03:18.847 CC 
lib/ftl/utils/ftl_property.o 00:03:18.847 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:18.847 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:18.847 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:18.847 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:18.847 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:18.847 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:18.847 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:18.847 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:19.107 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:19.107 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:19.107 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:19.107 CC lib/ftl/base/ftl_base_dev.o 00:03:19.107 CC lib/ftl/base/ftl_base_bdev.o 00:03:19.107 CC lib/ftl/ftl_trace.o 00:03:19.107 LIB libspdk_nbd.a 00:03:19.107 SO libspdk_nbd.so.7.0 00:03:19.364 SYMLINK libspdk_nbd.so 00:03:19.364 LIB libspdk_scsi.a 00:03:19.364 SO libspdk_scsi.so.9.0 00:03:19.364 SYMLINK libspdk_scsi.so 00:03:19.364 LIB libspdk_ublk.a 00:03:19.622 SO libspdk_ublk.so.3.0 00:03:19.622 SYMLINK libspdk_ublk.so 00:03:19.622 CC lib/iscsi/conn.o 00:03:19.622 CC lib/vhost/vhost.o 00:03:19.622 CC lib/vhost/vhost_rpc.o 00:03:19.622 CC lib/iscsi/init_grp.o 00:03:19.622 CC lib/vhost/vhost_scsi.o 00:03:19.622 CC lib/iscsi/iscsi.o 00:03:19.622 CC lib/vhost/vhost_blk.o 00:03:19.622 CC lib/iscsi/md5.o 00:03:19.622 CC lib/vhost/rte_vhost_user.o 00:03:19.622 CC lib/iscsi/param.o 00:03:19.622 CC lib/iscsi/portal_grp.o 00:03:19.622 CC lib/iscsi/tgt_node.o 00:03:19.622 CC lib/iscsi/iscsi_rpc.o 00:03:19.622 CC lib/iscsi/iscsi_subsystem.o 00:03:19.622 CC lib/iscsi/task.o 00:03:19.879 LIB libspdk_ftl.a 00:03:20.135 SO libspdk_ftl.so.9.0 00:03:20.393 SYMLINK libspdk_ftl.so 00:03:20.958 LIB libspdk_vhost.a 00:03:20.958 SO libspdk_vhost.so.8.0 00:03:20.958 LIB libspdk_nvmf.a 00:03:20.958 SO libspdk_nvmf.so.18.0 00:03:20.958 SYMLINK libspdk_vhost.so 00:03:20.958 LIB libspdk_iscsi.a 00:03:21.216 SO libspdk_iscsi.so.8.0 00:03:21.216 SYMLINK libspdk_nvmf.so 00:03:21.216 SYMLINK libspdk_iscsi.so 00:03:21.473 CC module/vfu_device/vfu_virtio.o 00:03:21.473 CC module/vfu_device/vfu_virtio_blk.o 00:03:21.473 CC module/vfu_device/vfu_virtio_scsi.o 00:03:21.473 CC module/vfu_device/vfu_virtio_rpc.o 00:03:21.473 CC module/env_dpdk/env_dpdk_rpc.o 00:03:21.731 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:21.731 CC module/keyring/linux/keyring.o 00:03:21.731 CC module/accel/error/accel_error.o 00:03:21.731 CC module/sock/posix/posix.o 00:03:21.731 CC module/keyring/linux/keyring_rpc.o 00:03:21.731 CC module/accel/error/accel_error_rpc.o 00:03:21.731 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:21.731 CC module/blob/bdev/blob_bdev.o 00:03:21.731 CC module/accel/ioat/accel_ioat.o 00:03:21.731 CC module/keyring/file/keyring.o 00:03:21.731 CC module/scheduler/gscheduler/gscheduler.o 00:03:21.731 CC module/keyring/file/keyring_rpc.o 00:03:21.731 CC module/accel/ioat/accel_ioat_rpc.o 00:03:21.731 CC module/accel/dsa/accel_dsa.o 00:03:21.731 CC module/accel/iaa/accel_iaa.o 00:03:21.731 CC module/accel/dsa/accel_dsa_rpc.o 00:03:21.731 CC module/accel/iaa/accel_iaa_rpc.o 00:03:21.731 LIB libspdk_env_dpdk_rpc.a 00:03:21.731 SO libspdk_env_dpdk_rpc.so.6.0 00:03:21.731 SYMLINK libspdk_env_dpdk_rpc.so 00:03:21.731 LIB libspdk_keyring_linux.a 00:03:21.731 LIB libspdk_scheduler_dpdk_governor.a 00:03:21.731 LIB libspdk_keyring_file.a 00:03:21.731 LIB libspdk_scheduler_gscheduler.a 00:03:21.731 SO libspdk_keyring_linux.so.1.0 00:03:21.731 SO libspdk_scheduler_gscheduler.so.4.0 00:03:21.731 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:21.731 
SO libspdk_keyring_file.so.1.0 00:03:21.731 LIB libspdk_accel_error.a 00:03:21.731 LIB libspdk_scheduler_dynamic.a 00:03:21.731 LIB libspdk_accel_ioat.a 00:03:21.990 LIB libspdk_accel_iaa.a 00:03:21.990 SO libspdk_accel_error.so.2.0 00:03:21.990 SO libspdk_scheduler_dynamic.so.4.0 00:03:21.990 SO libspdk_accel_ioat.so.6.0 00:03:21.990 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:21.990 SYMLINK libspdk_scheduler_gscheduler.so 00:03:21.990 SYMLINK libspdk_keyring_linux.so 00:03:21.990 SYMLINK libspdk_keyring_file.so 00:03:21.990 SO libspdk_accel_iaa.so.3.0 00:03:21.990 SYMLINK libspdk_scheduler_dynamic.so 00:03:21.990 SYMLINK libspdk_accel_error.so 00:03:21.990 LIB libspdk_accel_dsa.a 00:03:21.990 LIB libspdk_blob_bdev.a 00:03:21.990 SYMLINK libspdk_accel_ioat.so 00:03:21.990 SO libspdk_accel_dsa.so.5.0 00:03:21.990 SYMLINK libspdk_accel_iaa.so 00:03:21.990 SO libspdk_blob_bdev.so.11.0 00:03:21.990 SYMLINK libspdk_blob_bdev.so 00:03:21.990 SYMLINK libspdk_accel_dsa.so 00:03:22.249 LIB libspdk_vfu_device.a 00:03:22.249 SO libspdk_vfu_device.so.3.0 00:03:22.249 CC module/bdev/lvol/vbdev_lvol.o 00:03:22.249 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:22.249 CC module/bdev/delay/vbdev_delay.o 00:03:22.249 CC module/bdev/nvme/bdev_nvme.o 00:03:22.249 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:22.249 CC module/blobfs/bdev/blobfs_bdev.o 00:03:22.249 CC module/bdev/split/vbdev_split.o 00:03:22.249 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:22.249 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:22.249 CC module/bdev/passthru/vbdev_passthru.o 00:03:22.249 CC module/bdev/malloc/bdev_malloc.o 00:03:22.249 CC module/bdev/aio/bdev_aio.o 00:03:22.249 CC module/bdev/iscsi/bdev_iscsi.o 00:03:22.249 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:22.249 CC module/bdev/ftl/bdev_ftl.o 00:03:22.249 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:22.249 CC module/bdev/null/bdev_null.o 00:03:22.249 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:22.249 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:22.249 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:22.249 CC module/bdev/split/vbdev_split_rpc.o 00:03:22.249 CC module/bdev/nvme/nvme_rpc.o 00:03:22.249 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:22.249 CC module/bdev/null/bdev_null_rpc.o 00:03:22.249 CC module/bdev/aio/bdev_aio_rpc.o 00:03:22.249 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:22.249 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:22.249 CC module/bdev/error/vbdev_error.o 00:03:22.249 CC module/bdev/nvme/bdev_mdns_client.o 00:03:22.249 CC module/bdev/gpt/gpt.o 00:03:22.249 CC module/bdev/raid/bdev_raid.o 00:03:22.249 CC module/bdev/nvme/vbdev_opal.o 00:03:22.249 CC module/bdev/error/vbdev_error_rpc.o 00:03:22.249 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:22.249 CC module/bdev/raid/bdev_raid_rpc.o 00:03:22.249 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:22.249 CC module/bdev/gpt/vbdev_gpt.o 00:03:22.249 CC module/bdev/raid/bdev_raid_sb.o 00:03:22.249 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:22.249 CC module/bdev/raid/raid0.o 00:03:22.249 CC module/bdev/raid/raid1.o 00:03:22.249 CC module/bdev/raid/concat.o 00:03:22.508 SYMLINK libspdk_vfu_device.so 00:03:22.508 LIB libspdk_sock_posix.a 00:03:22.508 LIB libspdk_blobfs_bdev.a 00:03:22.508 SO libspdk_sock_posix.so.6.0 00:03:22.766 SO libspdk_blobfs_bdev.so.6.0 00:03:22.766 LIB libspdk_bdev_gpt.a 00:03:22.766 LIB libspdk_bdev_split.a 00:03:22.766 SYMLINK libspdk_sock_posix.so 00:03:22.766 LIB libspdk_bdev_delay.a 00:03:22.766 SYMLINK libspdk_blobfs_bdev.so 00:03:22.766 SO 
libspdk_bdev_gpt.so.6.0 00:03:22.766 SO libspdk_bdev_split.so.6.0 00:03:22.766 SO libspdk_bdev_delay.so.6.0 00:03:22.766 LIB libspdk_bdev_passthru.a 00:03:22.766 LIB libspdk_bdev_null.a 00:03:22.766 LIB libspdk_bdev_error.a 00:03:22.766 SYMLINK libspdk_bdev_gpt.so 00:03:22.766 LIB libspdk_bdev_ftl.a 00:03:22.766 SYMLINK libspdk_bdev_split.so 00:03:22.766 SO libspdk_bdev_passthru.so.6.0 00:03:22.766 SO libspdk_bdev_null.so.6.0 00:03:22.766 SYMLINK libspdk_bdev_delay.so 00:03:22.766 SO libspdk_bdev_error.so.6.0 00:03:22.766 SO libspdk_bdev_ftl.so.6.0 00:03:22.766 LIB libspdk_bdev_malloc.a 00:03:22.766 LIB libspdk_bdev_zone_block.a 00:03:22.766 SYMLINK libspdk_bdev_passthru.so 00:03:22.766 SYMLINK libspdk_bdev_null.so 00:03:22.766 SO libspdk_bdev_malloc.so.6.0 00:03:22.766 LIB libspdk_bdev_aio.a 00:03:22.766 SYMLINK libspdk_bdev_error.so 00:03:22.766 SO libspdk_bdev_zone_block.so.6.0 00:03:22.766 SYMLINK libspdk_bdev_ftl.so 00:03:23.025 SO libspdk_bdev_aio.so.6.0 00:03:23.025 LIB libspdk_bdev_iscsi.a 00:03:23.025 SYMLINK libspdk_bdev_malloc.so 00:03:23.025 SYMLINK libspdk_bdev_zone_block.so 00:03:23.025 SO libspdk_bdev_iscsi.so.6.0 00:03:23.025 SYMLINK libspdk_bdev_aio.so 00:03:23.025 SYMLINK libspdk_bdev_iscsi.so 00:03:23.025 LIB libspdk_bdev_virtio.a 00:03:23.025 LIB libspdk_bdev_lvol.a 00:03:23.025 SO libspdk_bdev_virtio.so.6.0 00:03:23.025 SO libspdk_bdev_lvol.so.6.0 00:03:23.025 SYMLINK libspdk_bdev_virtio.so 00:03:23.025 SYMLINK libspdk_bdev_lvol.so 00:03:23.591 LIB libspdk_bdev_raid.a 00:03:23.591 SO libspdk_bdev_raid.so.6.0 00:03:23.591 SYMLINK libspdk_bdev_raid.so 00:03:24.525 LIB libspdk_bdev_nvme.a 00:03:24.525 SO libspdk_bdev_nvme.so.7.0 00:03:24.783 SYMLINK libspdk_bdev_nvme.so 00:03:25.041 CC module/event/subsystems/iobuf/iobuf.o 00:03:25.041 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:25.041 CC module/event/subsystems/scheduler/scheduler.o 00:03:25.041 CC module/event/subsystems/keyring/keyring.o 00:03:25.041 CC module/event/subsystems/vmd/vmd.o 00:03:25.041 CC module/event/subsystems/sock/sock.o 00:03:25.041 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:25.041 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:25.041 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:25.300 LIB libspdk_event_keyring.a 00:03:25.300 LIB libspdk_event_sock.a 00:03:25.300 LIB libspdk_event_vhost_blk.a 00:03:25.300 LIB libspdk_event_vfu_tgt.a 00:03:25.300 LIB libspdk_event_scheduler.a 00:03:25.300 LIB libspdk_event_vmd.a 00:03:25.300 SO libspdk_event_keyring.so.1.0 00:03:25.300 SO libspdk_event_vhost_blk.so.3.0 00:03:25.300 SO libspdk_event_sock.so.5.0 00:03:25.300 SO libspdk_event_vfu_tgt.so.3.0 00:03:25.300 SO libspdk_event_scheduler.so.4.0 00:03:25.300 SO libspdk_event_vmd.so.6.0 00:03:25.300 LIB libspdk_event_iobuf.a 00:03:25.300 SO libspdk_event_iobuf.so.3.0 00:03:25.300 SYMLINK libspdk_event_keyring.so 00:03:25.300 SYMLINK libspdk_event_sock.so 00:03:25.300 SYMLINK libspdk_event_vhost_blk.so 00:03:25.300 SYMLINK libspdk_event_vfu_tgt.so 00:03:25.300 SYMLINK libspdk_event_scheduler.so 00:03:25.300 SYMLINK libspdk_event_vmd.so 00:03:25.300 SYMLINK libspdk_event_iobuf.so 00:03:25.559 CC module/event/subsystems/accel/accel.o 00:03:25.559 LIB libspdk_event_accel.a 00:03:25.818 SO libspdk_event_accel.so.6.0 00:03:25.818 SYMLINK libspdk_event_accel.so 00:03:25.818 CC module/event/subsystems/bdev/bdev.o 00:03:26.077 LIB libspdk_event_bdev.a 00:03:26.077 SO libspdk_event_bdev.so.6.0 00:03:26.077 SYMLINK libspdk_event_bdev.so 00:03:26.336 CC module/event/subsystems/nvmf/nvmf_rpc.o 
00:03:26.336 CC module/event/subsystems/scsi/scsi.o 00:03:26.336 CC module/event/subsystems/ublk/ublk.o 00:03:26.336 CC module/event/subsystems/nbd/nbd.o 00:03:26.336 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:26.595 LIB libspdk_event_nbd.a 00:03:26.595 LIB libspdk_event_ublk.a 00:03:26.595 LIB libspdk_event_scsi.a 00:03:26.595 SO libspdk_event_nbd.so.6.0 00:03:26.595 SO libspdk_event_ublk.so.3.0 00:03:26.595 SO libspdk_event_scsi.so.6.0 00:03:26.595 SYMLINK libspdk_event_ublk.so 00:03:26.595 SYMLINK libspdk_event_nbd.so 00:03:26.595 SYMLINK libspdk_event_scsi.so 00:03:26.595 LIB libspdk_event_nvmf.a 00:03:26.595 SO libspdk_event_nvmf.so.6.0 00:03:26.595 SYMLINK libspdk_event_nvmf.so 00:03:26.853 CC module/event/subsystems/iscsi/iscsi.o 00:03:26.853 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:26.853 LIB libspdk_event_vhost_scsi.a 00:03:26.853 LIB libspdk_event_iscsi.a 00:03:26.853 SO libspdk_event_vhost_scsi.so.3.0 00:03:26.853 SO libspdk_event_iscsi.so.6.0 00:03:26.853 SYMLINK libspdk_event_vhost_scsi.so 00:03:27.113 SYMLINK libspdk_event_iscsi.so 00:03:27.113 SO libspdk.so.6.0 00:03:27.113 SYMLINK libspdk.so 00:03:27.379 CC app/trace_record/trace_record.o 00:03:27.379 CC app/spdk_top/spdk_top.o 00:03:27.379 CXX app/trace/trace.o 00:03:27.379 TEST_HEADER include/spdk/accel.h 00:03:27.379 CC app/spdk_nvme_perf/perf.o 00:03:27.379 TEST_HEADER include/spdk/accel_module.h 00:03:27.379 CC app/spdk_nvme_discover/discovery_aer.o 00:03:27.379 CC app/spdk_lspci/spdk_lspci.o 00:03:27.379 CC test/rpc_client/rpc_client_test.o 00:03:27.379 TEST_HEADER include/spdk/assert.h 00:03:27.379 TEST_HEADER include/spdk/barrier.h 00:03:27.379 CC app/spdk_nvme_identify/identify.o 00:03:27.379 TEST_HEADER include/spdk/base64.h 00:03:27.379 TEST_HEADER include/spdk/bdev.h 00:03:27.379 TEST_HEADER include/spdk/bdev_module.h 00:03:27.379 TEST_HEADER include/spdk/bdev_zone.h 00:03:27.379 TEST_HEADER include/spdk/bit_array.h 00:03:27.379 TEST_HEADER include/spdk/bit_pool.h 00:03:27.379 TEST_HEADER include/spdk/blob_bdev.h 00:03:27.379 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:27.379 TEST_HEADER include/spdk/blobfs.h 00:03:27.379 TEST_HEADER include/spdk/blob.h 00:03:27.379 TEST_HEADER include/spdk/conf.h 00:03:27.379 TEST_HEADER include/spdk/config.h 00:03:27.379 TEST_HEADER include/spdk/cpuset.h 00:03:27.379 TEST_HEADER include/spdk/crc16.h 00:03:27.379 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:27.379 TEST_HEADER include/spdk/crc32.h 00:03:27.379 TEST_HEADER include/spdk/crc64.h 00:03:27.379 TEST_HEADER include/spdk/dif.h 00:03:27.379 CC app/spdk_dd/spdk_dd.o 00:03:27.379 TEST_HEADER include/spdk/dma.h 00:03:27.379 TEST_HEADER include/spdk/endian.h 00:03:27.379 TEST_HEADER include/spdk/env_dpdk.h 00:03:27.379 TEST_HEADER include/spdk/env.h 00:03:27.379 TEST_HEADER include/spdk/event.h 00:03:27.379 CC app/nvmf_tgt/nvmf_main.o 00:03:27.379 CC app/iscsi_tgt/iscsi_tgt.o 00:03:27.379 TEST_HEADER include/spdk/fd_group.h 00:03:27.379 TEST_HEADER include/spdk/fd.h 00:03:27.379 TEST_HEADER include/spdk/file.h 00:03:27.379 TEST_HEADER include/spdk/ftl.h 00:03:27.379 TEST_HEADER include/spdk/gpt_spec.h 00:03:27.379 CC app/vhost/vhost.o 00:03:27.379 TEST_HEADER include/spdk/hexlify.h 00:03:27.379 TEST_HEADER include/spdk/histogram_data.h 00:03:27.379 TEST_HEADER include/spdk/idxd.h 00:03:27.379 TEST_HEADER include/spdk/idxd_spec.h 00:03:27.379 TEST_HEADER include/spdk/init.h 00:03:27.379 TEST_HEADER include/spdk/ioat.h 00:03:27.379 CC examples/ioat/perf/perf.o 00:03:27.379 TEST_HEADER 
include/spdk/ioat_spec.h 00:03:27.379 CC app/spdk_tgt/spdk_tgt.o 00:03:27.379 TEST_HEADER include/spdk/iscsi_spec.h 00:03:27.379 CC examples/sock/hello_world/hello_sock.o 00:03:27.379 CC test/event/event_perf/event_perf.o 00:03:27.379 CC test/event/reactor/reactor.o 00:03:27.379 CC examples/ioat/verify/verify.o 00:03:27.379 TEST_HEADER include/spdk/json.h 00:03:27.379 CC examples/nvme/reconnect/reconnect.o 00:03:27.379 CC examples/vmd/led/led.o 00:03:27.379 CC test/app/histogram_perf/histogram_perf.o 00:03:27.379 CC examples/accel/perf/accel_perf.o 00:03:27.379 TEST_HEADER include/spdk/jsonrpc.h 00:03:27.379 CC examples/util/zipf/zipf.o 00:03:27.379 CC test/thread/poller_perf/poller_perf.o 00:03:27.379 TEST_HEADER include/spdk/keyring.h 00:03:27.379 TEST_HEADER include/spdk/keyring_module.h 00:03:27.379 TEST_HEADER include/spdk/likely.h 00:03:27.379 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:27.379 CC test/nvme/aer/aer.o 00:03:27.379 TEST_HEADER include/spdk/log.h 00:03:27.379 CC examples/vmd/lsvmd/lsvmd.o 00:03:27.379 TEST_HEADER include/spdk/lvol.h 00:03:27.379 CC test/event/reactor_perf/reactor_perf.o 00:03:27.379 CC examples/nvme/hello_world/hello_world.o 00:03:27.379 CC examples/idxd/perf/perf.o 00:03:27.379 TEST_HEADER include/spdk/memory.h 00:03:27.379 CC app/fio/nvme/fio_plugin.o 00:03:27.379 TEST_HEADER include/spdk/mmio.h 00:03:27.379 TEST_HEADER include/spdk/nbd.h 00:03:27.379 TEST_HEADER include/spdk/notify.h 00:03:27.641 TEST_HEADER include/spdk/nvme.h 00:03:27.641 TEST_HEADER include/spdk/nvme_intel.h 00:03:27.641 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:27.641 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:27.641 CC examples/blob/cli/blobcli.o 00:03:27.641 TEST_HEADER include/spdk/nvme_spec.h 00:03:27.641 TEST_HEADER include/spdk/nvme_zns.h 00:03:27.641 CC examples/blob/hello_world/hello_blob.o 00:03:27.641 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:27.641 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:27.641 CC test/dma/test_dma/test_dma.o 00:03:27.641 CC test/accel/dif/dif.o 00:03:27.641 CC examples/nvmf/nvmf/nvmf.o 00:03:27.641 CC examples/thread/thread/thread_ex.o 00:03:27.641 CC examples/bdev/hello_world/hello_bdev.o 00:03:27.641 TEST_HEADER include/spdk/nvmf.h 00:03:27.641 TEST_HEADER include/spdk/nvmf_spec.h 00:03:27.641 CC examples/bdev/bdevperf/bdevperf.o 00:03:27.641 TEST_HEADER include/spdk/nvmf_transport.h 00:03:27.641 CC test/event/app_repeat/app_repeat.o 00:03:27.641 TEST_HEADER include/spdk/opal.h 00:03:27.641 CC test/app/bdev_svc/bdev_svc.o 00:03:27.641 CC test/bdev/bdevio/bdevio.o 00:03:27.641 TEST_HEADER include/spdk/opal_spec.h 00:03:27.641 TEST_HEADER include/spdk/pci_ids.h 00:03:27.641 TEST_HEADER include/spdk/pipe.h 00:03:27.641 TEST_HEADER include/spdk/queue.h 00:03:27.641 CC test/blobfs/mkfs/mkfs.o 00:03:27.642 TEST_HEADER include/spdk/reduce.h 00:03:27.642 TEST_HEADER include/spdk/rpc.h 00:03:27.642 TEST_HEADER include/spdk/scheduler.h 00:03:27.642 TEST_HEADER include/spdk/scsi.h 00:03:27.642 TEST_HEADER include/spdk/scsi_spec.h 00:03:27.642 TEST_HEADER include/spdk/sock.h 00:03:27.642 TEST_HEADER include/spdk/stdinc.h 00:03:27.642 TEST_HEADER include/spdk/string.h 00:03:27.642 TEST_HEADER include/spdk/thread.h 00:03:27.642 TEST_HEADER include/spdk/trace.h 00:03:27.642 TEST_HEADER include/spdk/trace_parser.h 00:03:27.642 TEST_HEADER include/spdk/tree.h 00:03:27.642 LINK spdk_lspci 00:03:27.642 TEST_HEADER include/spdk/ublk.h 00:03:27.642 TEST_HEADER include/spdk/util.h 00:03:27.642 TEST_HEADER include/spdk/uuid.h 00:03:27.642 TEST_HEADER 
include/spdk/version.h 00:03:27.642 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:27.642 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:27.642 TEST_HEADER include/spdk/vhost.h 00:03:27.642 TEST_HEADER include/spdk/vmd.h 00:03:27.642 TEST_HEADER include/spdk/xor.h 00:03:27.642 TEST_HEADER include/spdk/zipf.h 00:03:27.642 CC test/env/mem_callbacks/mem_callbacks.o 00:03:27.642 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:27.642 CXX test/cpp_headers/accel.o 00:03:27.642 CC test/lvol/esnap/esnap.o 00:03:27.642 LINK rpc_client_test 00:03:27.642 LINK spdk_nvme_discover 00:03:27.642 LINK interrupt_tgt 00:03:27.904 LINK reactor 00:03:27.904 LINK event_perf 00:03:27.904 LINK led 00:03:27.904 LINK histogram_perf 00:03:27.904 LINK lsvmd 00:03:27.904 LINK reactor_perf 00:03:27.904 LINK nvmf_tgt 00:03:27.904 LINK poller_perf 00:03:27.904 LINK spdk_trace_record 00:03:27.904 LINK zipf 00:03:27.904 LINK vhost 00:03:27.904 LINK iscsi_tgt 00:03:27.904 LINK app_repeat 00:03:27.904 LINK spdk_tgt 00:03:27.904 LINK ioat_perf 00:03:27.904 LINK verify 00:03:27.904 LINK hello_sock 00:03:27.904 LINK hello_world 00:03:27.904 LINK bdev_svc 00:03:27.904 LINK mkfs 00:03:27.904 LINK hello_bdev 00:03:28.170 LINK hello_blob 00:03:28.170 LINK thread 00:03:28.170 CXX test/cpp_headers/accel_module.o 00:03:28.170 LINK aer 00:03:28.170 LINK mem_callbacks 00:03:28.170 CC examples/nvme/arbitration/arbitration.o 00:03:28.170 CXX test/cpp_headers/assert.o 00:03:28.170 LINK spdk_dd 00:03:28.170 LINK nvmf 00:03:28.170 LINK spdk_trace 00:03:28.170 LINK reconnect 00:03:28.170 LINK idxd_perf 00:03:28.170 CC test/app/jsoncat/jsoncat.o 00:03:28.170 CC test/nvme/reset/reset.o 00:03:28.170 CC test/nvme/sgl/sgl.o 00:03:28.170 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:28.170 CXX test/cpp_headers/barrier.o 00:03:28.170 CC test/app/stub/stub.o 00:03:28.170 CC test/env/vtophys/vtophys.o 00:03:28.170 CXX test/cpp_headers/base64.o 00:03:28.170 CC examples/nvme/hotplug/hotplug.o 00:03:28.170 LINK test_dma 00:03:28.170 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:28.442 CC examples/nvme/abort/abort.o 00:03:28.442 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:28.442 LINK bdevio 00:03:28.442 CXX test/cpp_headers/bdev.o 00:03:28.442 CC test/env/memory/memory_ut.o 00:03:28.442 CC app/fio/bdev/fio_plugin.o 00:03:28.442 LINK accel_perf 00:03:28.442 CXX test/cpp_headers/bdev_module.o 00:03:28.442 LINK dif 00:03:28.442 CC test/env/pci/pci_ut.o 00:03:28.442 CC test/nvme/e2edp/nvme_dp.o 00:03:28.442 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:28.442 CXX test/cpp_headers/bdev_zone.o 00:03:28.442 CXX test/cpp_headers/bit_array.o 00:03:28.442 LINK nvme_manage 00:03:28.442 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:28.442 CC test/nvme/overhead/overhead.o 00:03:28.442 LINK blobcli 00:03:28.442 LINK nvme_fuzz 00:03:28.442 CC test/nvme/err_injection/err_injection.o 00:03:28.442 CC test/event/scheduler/scheduler.o 00:03:28.442 CXX test/cpp_headers/bit_pool.o 00:03:28.442 LINK jsoncat 00:03:28.442 CXX test/cpp_headers/blob_bdev.o 00:03:28.442 CC test/nvme/startup/startup.o 00:03:28.442 LINK spdk_nvme 00:03:28.442 CXX test/cpp_headers/blobfs_bdev.o 00:03:28.704 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:28.704 CXX test/cpp_headers/blobfs.o 00:03:28.704 LINK vtophys 00:03:28.704 CC test/nvme/reserve/reserve.o 00:03:28.704 CXX test/cpp_headers/blob.o 00:03:28.704 LINK stub 00:03:28.704 LINK cmb_copy 00:03:28.704 CC test/nvme/simple_copy/simple_copy.o 00:03:28.704 CC test/nvme/connect_stress/connect_stress.o 00:03:28.704 LINK 
env_dpdk_post_init 00:03:28.704 CC test/nvme/boot_partition/boot_partition.o 00:03:28.704 CXX test/cpp_headers/conf.o 00:03:28.704 LINK reset 00:03:28.704 CXX test/cpp_headers/config.o 00:03:28.704 LINK sgl 00:03:28.704 CXX test/cpp_headers/cpuset.o 00:03:28.704 CXX test/cpp_headers/crc16.o 00:03:28.704 LINK hotplug 00:03:28.704 CXX test/cpp_headers/crc32.o 00:03:28.704 LINK arbitration 00:03:28.704 CXX test/cpp_headers/crc64.o 00:03:28.704 CXX test/cpp_headers/dif.o 00:03:28.965 CXX test/cpp_headers/dma.o 00:03:28.965 CXX test/cpp_headers/endian.o 00:03:28.965 LINK err_injection 00:03:28.965 CC test/nvme/compliance/nvme_compliance.o 00:03:28.965 CXX test/cpp_headers/env_dpdk.o 00:03:28.965 CXX test/cpp_headers/env.o 00:03:28.965 LINK spdk_nvme_perf 00:03:28.965 CC test/nvme/fused_ordering/fused_ordering.o 00:03:28.965 CC test/nvme/fdp/fdp.o 00:03:28.965 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:28.965 CXX test/cpp_headers/event.o 00:03:28.966 CC test/nvme/cuse/cuse.o 00:03:28.966 LINK nvme_dp 00:03:28.966 CXX test/cpp_headers/fd_group.o 00:03:28.966 LINK scheduler 00:03:28.966 LINK startup 00:03:28.966 CXX test/cpp_headers/fd.o 00:03:28.966 CXX test/cpp_headers/file.o 00:03:28.966 CXX test/cpp_headers/ftl.o 00:03:28.966 CXX test/cpp_headers/gpt_spec.o 00:03:28.966 LINK pmr_persistence 00:03:28.966 CXX test/cpp_headers/hexlify.o 00:03:28.966 LINK spdk_nvme_identify 00:03:28.966 LINK reserve 00:03:28.966 LINK spdk_top 00:03:28.966 CXX test/cpp_headers/histogram_data.o 00:03:28.966 LINK bdevperf 00:03:28.966 LINK overhead 00:03:28.966 LINK abort 00:03:28.966 LINK connect_stress 00:03:29.233 CXX test/cpp_headers/idxd.o 00:03:29.233 CXX test/cpp_headers/idxd_spec.o 00:03:29.233 LINK boot_partition 00:03:29.233 CXX test/cpp_headers/init.o 00:03:29.233 LINK pci_ut 00:03:29.233 CXX test/cpp_headers/ioat.o 00:03:29.233 CXX test/cpp_headers/ioat_spec.o 00:03:29.233 CXX test/cpp_headers/iscsi_spec.o 00:03:29.233 LINK simple_copy 00:03:29.233 CXX test/cpp_headers/json.o 00:03:29.233 CXX test/cpp_headers/jsonrpc.o 00:03:29.233 CXX test/cpp_headers/keyring.o 00:03:29.233 CXX test/cpp_headers/keyring_module.o 00:03:29.233 CXX test/cpp_headers/likely.o 00:03:29.233 LINK vhost_fuzz 00:03:29.233 CXX test/cpp_headers/log.o 00:03:29.233 CXX test/cpp_headers/lvol.o 00:03:29.233 CXX test/cpp_headers/memory.o 00:03:29.233 CXX test/cpp_headers/mmio.o 00:03:29.233 CXX test/cpp_headers/nbd.o 00:03:29.233 CXX test/cpp_headers/notify.o 00:03:29.233 CXX test/cpp_headers/nvme.o 00:03:29.233 CXX test/cpp_headers/nvme_intel.o 00:03:29.233 CXX test/cpp_headers/nvme_ocssd.o 00:03:29.233 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:29.233 CXX test/cpp_headers/nvme_spec.o 00:03:29.233 CXX test/cpp_headers/nvme_zns.o 00:03:29.233 LINK fused_ordering 00:03:29.233 LINK doorbell_aers 00:03:29.233 CXX test/cpp_headers/nvmf_cmd.o 00:03:29.233 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:29.233 CXX test/cpp_headers/nvmf.o 00:03:29.233 CXX test/cpp_headers/nvmf_spec.o 00:03:29.233 CXX test/cpp_headers/nvmf_transport.o 00:03:29.491 CXX test/cpp_headers/opal.o 00:03:29.492 LINK spdk_bdev 00:03:29.492 CXX test/cpp_headers/pci_ids.o 00:03:29.492 CXX test/cpp_headers/opal_spec.o 00:03:29.492 CXX test/cpp_headers/pipe.o 00:03:29.492 CXX test/cpp_headers/queue.o 00:03:29.492 CXX test/cpp_headers/reduce.o 00:03:29.492 CXX test/cpp_headers/rpc.o 00:03:29.492 CXX test/cpp_headers/scheduler.o 00:03:29.492 CXX test/cpp_headers/scsi.o 00:03:29.492 CXX test/cpp_headers/scsi_spec.o 00:03:29.492 CXX test/cpp_headers/sock.o 00:03:29.492 CXX 
test/cpp_headers/stdinc.o 00:03:29.492 CXX test/cpp_headers/string.o 00:03:29.492 CXX test/cpp_headers/thread.o 00:03:29.492 CXX test/cpp_headers/trace.o 00:03:29.492 CXX test/cpp_headers/trace_parser.o 00:03:29.492 CXX test/cpp_headers/tree.o 00:03:29.492 LINK nvme_compliance 00:03:29.492 CXX test/cpp_headers/ublk.o 00:03:29.492 CXX test/cpp_headers/util.o 00:03:29.492 CXX test/cpp_headers/uuid.o 00:03:29.492 CXX test/cpp_headers/version.o 00:03:29.492 CXX test/cpp_headers/vfio_user_pci.o 00:03:29.492 CXX test/cpp_headers/vfio_user_spec.o 00:03:29.492 CXX test/cpp_headers/vhost.o 00:03:29.492 CXX test/cpp_headers/vmd.o 00:03:29.492 LINK fdp 00:03:29.492 CXX test/cpp_headers/xor.o 00:03:29.492 CXX test/cpp_headers/zipf.o 00:03:29.751 LINK memory_ut 00:03:30.683 LINK iscsi_fuzz 00:03:30.683 LINK cuse 00:03:33.963 LINK esnap 00:03:33.963 00:03:33.963 real 0m40.383s 00:03:33.963 user 7m32.477s 00:03:33.963 sys 1m48.790s 00:03:33.963 17:39:08 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:33.963 17:39:08 make -- common/autotest_common.sh@10 -- $ set +x 00:03:33.963 ************************************ 00:03:33.963 END TEST make 00:03:33.963 ************************************ 00:03:33.963 17:39:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:33.963 17:39:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:33.963 17:39:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:33.963 17:39:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.963 17:39:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:33.963 17:39:08 -- pm/common@44 -- $ pid=709334 00:03:33.963 17:39:08 -- pm/common@50 -- $ kill -TERM 709334 00:03:33.963 17:39:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.963 17:39:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:33.963 17:39:08 -- pm/common@44 -- $ pid=709336 00:03:33.963 17:39:08 -- pm/common@50 -- $ kill -TERM 709336 00:03:33.963 17:39:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.963 17:39:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:33.963 17:39:08 -- pm/common@44 -- $ pid=709338 00:03:33.963 17:39:08 -- pm/common@50 -- $ kill -TERM 709338 00:03:33.963 17:39:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.963 17:39:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:33.963 17:39:08 -- pm/common@44 -- $ pid=709366 00:03:33.963 17:39:08 -- pm/common@50 -- $ sudo -E kill -TERM 709366 00:03:34.223 17:39:08 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:34.223 17:39:08 -- nvmf/common.sh@7 -- # uname -s 00:03:34.223 17:39:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:34.223 17:39:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:34.223 17:39:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:34.223 17:39:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:34.223 17:39:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:34.223 17:39:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:34.223 17:39:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:34.223 17:39:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:34.223 
17:39:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:34.223 17:39:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:34.223 17:39:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:34.223 17:39:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:34.223 17:39:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:34.223 17:39:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:34.223 17:39:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:34.223 17:39:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:34.223 17:39:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:34.223 17:39:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:34.223 17:39:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:34.223 17:39:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:34.223 17:39:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.223 17:39:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.223 17:39:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.223 17:39:08 -- paths/export.sh@5 -- # export PATH 00:03:34.223 17:39:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.223 17:39:08 -- nvmf/common.sh@47 -- # : 0 00:03:34.223 17:39:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:34.223 17:39:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:34.223 17:39:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:34.223 17:39:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:34.223 17:39:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:34.223 17:39:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:34.223 17:39:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:34.223 17:39:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:34.223 17:39:08 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:34.223 17:39:08 -- spdk/autotest.sh@32 -- # uname -s 00:03:34.223 17:39:08 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:34.223 17:39:08 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:34.223 17:39:08 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:34.223 17:39:08 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:34.223 17:39:08 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:34.223 17:39:08 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:34.223 17:39:08 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:34.223 17:39:08 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:34.223 17:39:08 -- spdk/autotest.sh@48 -- # udevadm_pid=784467 00:03:34.223 17:39:08 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:34.223 17:39:08 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:34.223 17:39:08 -- pm/common@17 -- # local monitor 00:03:34.223 17:39:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.223 17:39:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.223 17:39:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.223 17:39:08 -- pm/common@21 -- # date +%s 00:03:34.223 17:39:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.223 17:39:08 -- pm/common@21 -- # date +%s 00:03:34.223 17:39:08 -- pm/common@25 -- # sleep 1 00:03:34.223 17:39:08 -- pm/common@21 -- # date +%s 00:03:34.223 17:39:08 -- pm/common@21 -- # date +%s 00:03:34.223 17:39:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721489948 00:03:34.224 17:39:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721489948 00:03:34.224 17:39:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721489948 00:03:34.224 17:39:08 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721489948 00:03:34.224 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721489948_collect-vmstat.pm.log 00:03:34.224 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721489948_collect-cpu-load.pm.log 00:03:34.224 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721489948_collect-cpu-temp.pm.log 00:03:34.224 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721489948_collect-bmc-pm.bmc.pm.log 00:03:35.159 17:39:09 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:35.159 17:39:09 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:35.159 17:39:09 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:35.159 17:39:09 -- common/autotest_common.sh@10 -- # set +x 00:03:35.159 17:39:09 -- spdk/autotest.sh@59 -- # create_test_list 00:03:35.159 17:39:09 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:35.159 17:39:09 -- common/autotest_common.sh@10 -- # set +x 00:03:35.159 17:39:09 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:35.159 17:39:09 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:35.159 17:39:09 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:35.159 17:39:09 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:35.159 17:39:09 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:35.159 17:39:09 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:35.159 17:39:09 -- common/autotest_common.sh@1451 -- # uname 00:03:35.159 17:39:09 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:35.159 17:39:09 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:35.159 17:39:09 -- common/autotest_common.sh@1471 -- # uname 00:03:35.159 17:39:09 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:35.159 17:39:09 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:35.159 17:39:09 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:35.159 17:39:09 -- spdk/autotest.sh@72 -- # hash lcov 00:03:35.159 17:39:09 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:35.159 17:39:09 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:35.159 --rc lcov_branch_coverage=1 00:03:35.159 --rc lcov_function_coverage=1 00:03:35.159 --rc genhtml_branch_coverage=1 00:03:35.159 --rc genhtml_function_coverage=1 00:03:35.159 --rc genhtml_legend=1 00:03:35.159 --rc geninfo_all_blocks=1 00:03:35.159 ' 00:03:35.159 17:39:09 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:35.159 --rc lcov_branch_coverage=1 00:03:35.159 --rc lcov_function_coverage=1 00:03:35.159 --rc genhtml_branch_coverage=1 00:03:35.159 --rc genhtml_function_coverage=1 00:03:35.159 --rc genhtml_legend=1 00:03:35.159 --rc geninfo_all_blocks=1 00:03:35.159 ' 00:03:35.159 17:39:09 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:35.159 --rc lcov_branch_coverage=1 00:03:35.159 --rc lcov_function_coverage=1 00:03:35.159 --rc genhtml_branch_coverage=1 00:03:35.159 --rc genhtml_function_coverage=1 00:03:35.159 --rc genhtml_legend=1 00:03:35.159 --rc geninfo_all_blocks=1 00:03:35.159 --no-external' 00:03:35.159 17:39:09 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:35.159 --rc lcov_branch_coverage=1 00:03:35.159 --rc lcov_function_coverage=1 00:03:35.159 --rc genhtml_branch_coverage=1 00:03:35.159 --rc genhtml_function_coverage=1 00:03:35.159 --rc genhtml_legend=1 00:03:35.159 --rc geninfo_all_blocks=1 00:03:35.159 --no-external' 00:03:35.159 17:39:09 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:35.418 lcov: LCOV version 1.14 00:03:35.418 17:39:09 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:50.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:50.280 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 
00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:05.142 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:05.142 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:05.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:05.143 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:05.143 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no 
functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:05.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:05.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:05.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:05.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:07.732 17:39:42 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:07.732 17:39:42 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:07.732 17:39:42 -- common/autotest_common.sh@10 -- # set +x 00:04:07.732 17:39:42 -- spdk/autotest.sh@91 -- # rm -f 00:04:07.732 17:39:42 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:09.103 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:09.103 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:09.103 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:09.103 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:09.103 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:09.103 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:09.103 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:09.103 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:09.103 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:09.103 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:09.103 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:09.103 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:09.103 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:09.103 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:09.103 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:09.103 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:09.103 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:09.103 17:39:43 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:09.103 17:39:43 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:09.103 17:39:43 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:09.103 17:39:43 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:09.103 17:39:43 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:09.103 17:39:43 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:09.103 17:39:43 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:09.103 17:39:43 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:09.103 17:39:43 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:09.103 17:39:43 -- 
spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:09.103 17:39:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:09.103 17:39:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:09.103 17:39:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:09.103 17:39:43 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:09.103 17:39:43 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:09.103 No valid GPT data, bailing 00:04:09.360 17:39:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:09.360 17:39:43 -- scripts/common.sh@391 -- # pt= 00:04:09.360 17:39:43 -- scripts/common.sh@392 -- # return 1 00:04:09.360 17:39:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:09.360 1+0 records in 00:04:09.360 1+0 records out 00:04:09.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00211872 s, 495 MB/s 00:04:09.360 17:39:43 -- spdk/autotest.sh@118 -- # sync 00:04:09.360 17:39:43 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:09.360 17:39:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:09.360 17:39:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:11.256 17:39:45 -- spdk/autotest.sh@124 -- # uname -s 00:04:11.256 17:39:45 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:11.256 17:39:45 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:11.256 17:39:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:11.256 17:39:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:11.256 17:39:45 -- common/autotest_common.sh@10 -- # set +x 00:04:11.256 ************************************ 00:04:11.256 START TEST setup.sh 00:04:11.256 ************************************ 00:04:11.256 17:39:45 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:11.256 * Looking for test storage... 00:04:11.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:11.256 17:39:45 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:11.256 17:39:45 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:11.256 17:39:45 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:11.256 17:39:45 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:11.256 17:39:45 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:11.256 17:39:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:11.256 ************************************ 00:04:11.256 START TEST acl 00:04:11.256 ************************************ 00:04:11.256 17:39:45 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:11.256 * Looking for test storage... 
00:04:11.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:11.256 17:39:45 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:11.256 17:39:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:11.256 17:39:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:11.256 17:39:45 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:11.256 17:39:45 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:11.256 17:39:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:11.256 17:39:45 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:11.256 17:39:45 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:11.256 17:39:45 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:11.256 17:39:45 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:11.256 17:39:45 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:11.256 17:39:45 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:11.256 17:39:45 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:11.256 17:39:45 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:11.256 17:39:45 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:11.256 17:39:45 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.629 17:39:47 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:12.629 17:39:47 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:12.629 17:39:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.629 17:39:47 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:12.629 17:39:47 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.629 17:39:47 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:13.562 Hugepages 00:04:13.562 node hugesize free / total 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 00:04:13.562 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 17:39:48 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.562 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.563 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:13.563 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.563 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.563 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.563 17:39:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:13.563 17:39:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:13.563 17:39:48 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:13.563 17:39:48 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:13.563 17:39:48 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:13.563 17:39:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.563 17:39:48 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:13.563 17:39:48 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:13.563 17:39:48 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:13.563 17:39:48 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:13.563 17:39:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:13.819 ************************************ 00:04:13.819 START TEST denied 00:04:13.819 ************************************ 00:04:13.819 17:39:48 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:13.819 17:39:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:13.819 17:39:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:13.819 17:39:48 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:13.819 17:39:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.819 17:39:48 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:15.189 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:15.189 17:39:49 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:15.189 17:39:49 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:15.189 17:39:49 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:15.189 17:39:49 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:15.189 17:39:49 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:15.189 17:39:49 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:15.189 17:39:49 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:15.189 17:39:49 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:15.189 17:39:49 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.189 17:39:49 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:17.102 00:04:17.102 real 0m3.477s 00:04:17.102 user 0m0.996s 00:04:17.102 sys 0m1.660s 00:04:17.102 17:39:51 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:17.102 17:39:51 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:17.102 ************************************ 00:04:17.102 END TEST denied 00:04:17.102 ************************************ 00:04:17.102 17:39:51 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:17.102 17:39:51 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:17.102 17:39:51 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:17.102 17:39:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:17.102 ************************************ 00:04:17.102 START TEST allowed 00:04:17.102 ************************************ 00:04:17.102 17:39:51 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:17.102 17:39:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:17.102 17:39:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:17.102 17:39:51 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:17.102 17:39:51 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.102 17:39:51 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.624 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:19.624 17:39:54 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:19.624 17:39:54 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:19.624 17:39:54 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:19.624 17:39:54 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.624 17:39:54 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.993 00:04:20.993 real 0m3.864s 00:04:20.993 user 0m1.038s 00:04:20.993 sys 0m1.720s 00:04:20.993 17:39:55 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:20.993 17:39:55 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:20.993 ************************************ 00:04:20.993 END TEST allowed 00:04:20.993 ************************************ 00:04:20.993 00:04:20.993 real 0m10.020s 00:04:20.993 user 0m3.064s 00:04:20.993 sys 0m5.107s 00:04:20.993 17:39:55 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:20.993 17:39:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:20.993 ************************************ 00:04:20.993 END TEST acl 00:04:20.993 ************************************ 00:04:20.993 17:39:55 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:20.993 17:39:55 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:20.993 17:39:55 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:04:20.993 17:39:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:21.251 ************************************ 00:04:21.251 START TEST hugepages 00:04:21.251 ************************************ 00:04:21.251 17:39:55 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:21.251 * Looking for test storage... 00:04:21.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 41625368 kB' 'MemAvailable: 45139696 kB' 'Buffers: 2704 kB' 'Cached: 12302292 kB' 'SwapCached: 0 kB' 'Active: 9287220 kB' 'Inactive: 3508168 kB' 'Active(anon): 8891640 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493652 kB' 'Mapped: 185656 kB' 'Shmem: 8401248 kB' 'KReclaimable: 208836 kB' 'Slab: 599072 kB' 'SReclaimable: 208836 kB' 'SUnreclaim: 390236 kB' 'KernelStack: 13056 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562312 kB' 'Committed_AS: 10069016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197192 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.251 17:39:55 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.251 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:21.252 17:39:55 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:21.252 17:39:55 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:21.252 17:39:55 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:21.252 17:39:55 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:21.252 17:39:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:21.252 ************************************ 00:04:21.252 START TEST default_setup 00:04:21.252 ************************************ 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.252 17:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:22.641 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:22.641 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:22.641 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:22.641 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:22.641 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:22.641 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:22.641 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
00:04:22.641 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:22.641 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:22.641 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:22.641 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:22.641 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:22.641 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:22.641 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:22.641 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:22.641 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:23.577 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.577 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43735548 kB' 'MemAvailable: 47249872 kB' 'Buffers: 2704 kB' 'Cached: 12302384 kB' 'SwapCached: 0 kB' 'Active: 9310696 kB' 'Inactive: 3508168 kB' 'Active(anon): 8915116 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516960 kB' 'Mapped: 186136 kB' 'Shmem: 8401340 kB' 'KReclaimable: 208832 kB' 'Slab: 598096 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 389264 kB' 'KernelStack: 12896 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10095492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 
17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.578 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@20 -- # local mem_f mem 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43737832 kB' 'MemAvailable: 47252156 kB' 'Buffers: 2704 kB' 'Cached: 12302384 kB' 'SwapCached: 0 kB' 'Active: 9306848 kB' 'Inactive: 3508168 kB' 'Active(anon): 8911268 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513192 kB' 'Mapped: 186524 kB' 'Shmem: 8401340 kB' 'KReclaimable: 208832 kB' 'Slab: 598156 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 389324 kB' 'KernelStack: 12928 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10091008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197208 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- 
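[Editor's note] The trace above is the body of get_meminfo() in setup/common.sh as it resolves HugePages_Surp: it snapshots /proc/meminfo into an array (stripping any leading "Node <N> " prefix so per-node meminfo files parse identically), then walks the lines with IFS=': ', skipping every key until it reaches the requested one and echoing its value (0 here). The following is a minimal sketch of that parsing loop, reconstructed from the trace rather than copied from the SPDK script; the helper name, argument handling and error paths are assumptions.

    get_meminfo_sketch() {
        # field to look up, e.g. AnonHugePages or HugePages_Surp
        local get=$1
        # optional NUMA node; empty means the global /proc/meminfo
        local node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        # a per-node meminfo file exists under /sys when a node is given
        if [[ -n "$node" && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # per-node lines carry a "Node <N> " prefix; strip it (needs extglob)
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # skip every key until the requested one, then print its value
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done
        echo 0
    }

The per-key skip is exactly what produces the long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" entries in the xtrace: one comparison and one continue per meminfo line until the match.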
setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.579 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.580 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43735760 kB' 'MemAvailable: 47250084 kB' 'Buffers: 2704 kB' 'Cached: 12302404 kB' 'SwapCached: 0 kB' 'Active: 9310436 kB' 'Inactive: 3508168 kB' 'Active(anon): 8914856 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516772 kB' 'Mapped: 186112 kB' 'Shmem: 8401360 kB' 'KReclaimable: 208832 kB' 'Slab: 598260 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 389428 kB' 'KernelStack: 12944 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10095532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197192 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.581 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.582 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:23.583 nr_hugepages=1024 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.583 resv_hugepages=0 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.583 surplus_hugepages=0 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.583 anon_hugepages=0 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43737896 kB' 'MemAvailable: 47252220 kB' 'Buffers: 2704 kB' 'Cached: 12302424 kB' 'SwapCached: 0 kB' 'Active: 9305208 kB' 'Inactive: 3508168 kB' 'Active(anon): 8909628 kB' 'Inactive(anon): 0 kB' 
'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511500 kB' 'Mapped: 185676 kB' 'Shmem: 8401380 kB' 'KReclaimable: 208832 kB' 'Slab: 598260 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 389428 kB' 'KernelStack: 12928 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10089432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197208 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
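The long run of [[ ... ]] / continue lines above is the get_meminfo helper from setup/common.sh scanning every key of /proc/meminfo until it reaches the one requested (here HugePages_Total, answered with echo 1024). What follows is only an approximate reconstruction of that helper, pieced together from the common.sh@17-@33 steps visible in this xtrace; names and structure are taken from the trace itself and may differ in detail from the script in the SPDK repository.

get_meminfo() {
    local get=$1        # key to look up, e.g. HugePages_Total
    local node=${2:-}   # optional NUMA node id
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # when a node id is supplied, prefer the per-node counters
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # per-node meminfo lines carry a "Node N " prefix; strip it (extglob assumed enabled)
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each non-matching key produces one "continue" line in the trace
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

In this test it is invoked as get_meminfo HugePages_Total for the whole system and as get_meminfo HugePages_Surp 0 for node 0, and the echoed values feed the (( 1024 == nr_hugepages + surp + resv )) check that follows in the trace.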
00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.584 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19026920 kB' 'MemUsed: 13850020 kB' 'SwapCached: 0 kB' 'Active: 7219440 kB' 'Inactive: 3325912 kB' 'Active(anon): 6960428 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3325912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10261236 kB' 'Mapped: 111272 kB' 'AnonPages: 287292 kB' 'Shmem: 6676312 kB' 'KernelStack: 6152 kB' 'PageTables: 3592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116892 kB' 'Slab: 340004 kB' 'SReclaimable: 116892 kB' 'SUnreclaim: 223112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 17:39:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:23.586 node0=1024 expecting 1024 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:23.586 00:04:23.586 real 0m2.414s 00:04:23.586 user 0m0.584s 00:04:23.586 sys 0m0.797s 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:23.586 17:39:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:23.586 ************************************ 00:04:23.586 END TEST default_setup 00:04:23.586 ************************************ 00:04:23.878 17:39:58 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:23.878 17:39:58 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:23.878 17:39:58 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.878 17:39:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:23.878 ************************************ 00:04:23.878 START TEST per_node_1G_alloc 00:04:23.878 ************************************ 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.878 17:39:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:24.808 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:24.808 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:24.808 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:24.808 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:24.808 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:24.808 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:24.808 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:24.808 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:24.808 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:24.808 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:24.808 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:24.808 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:24.808 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:24.808 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:24.808 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:24.808 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:24.808 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:25.070 17:39:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43730080 kB' 'MemAvailable: 47244404 kB' 'Buffers: 2704 kB' 'Cached: 12302496 kB' 'SwapCached: 0 kB' 'Active: 9305616 kB' 'Inactive: 3508168 kB' 'Active(anon): 8910036 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511860 kB' 'Mapped: 185792 kB' 'Shmem: 8401452 kB' 'KReclaimable: 208832 kB' 'Slab: 598144 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 389312 kB' 'KernelStack: 12944 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10089620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197224 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.070 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.071 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
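[Editor's note, not part of the console output] The xtrace block above is setup/common.sh's get_meminfo helper walking /proc/meminfo one key at a time: it splits each line on ': ', skips every key that does not match the requested one (hence the long run of "continue" entries), and finally echoes the matching value (here 0 for AnonHugePages, feeding hugepages.sh's anon=0). The sketch below is a minimal, hedged reconstruction of that helper from what the trace shows; it is not the verbatim SPDK source, and the per-node branch is an assumption inferred from the "/sys/devices/system/node/node/meminfo" probe visible in the trace.

  #!/usr/bin/env bash
  shopt -s extglob  # needed for the +([0-9]) pattern used to strip "Node N "

  get_meminfo() {
      local get=$1        # key to look up, e.g. AnonHugePages or HugePages_Surp
      local node=${2:-}   # optional NUMA node; empty in this run
      local var val _
      local mem_f=/proc/meminfo
      local mem line

      # Assumed per-node fallback: use the node's own meminfo when one was requested.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node <n> "; drop that prefix.
      mem=("${mem[@]#Node +([0-9]) }")

      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the "continue" entries in the trace
          echo "$val"                        # e.g. 0, or 1024 for HugePages_Total
          return 0
      done
      return 1
  }

  # Values consistent with what this log echoes further down:
  #   get_meminfo AnonHugePages    -> 0     (anon=0)
  #   get_meminfo HugePages_Surp   -> 0     (surp=0)
  #   get_meminfo HugePages_Total  -> 1024  (nr_hugepages=1024)

The same scan repeats below for HugePages_Surp, HugePages_Rsvd and HugePages_Total, which is why the remainder of this block is near-identical trace output with only the compared key changing.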
00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43729600 kB' 'MemAvailable: 47243924 kB' 'Buffers: 2704 kB' 'Cached: 12302500 kB' 'SwapCached: 0 kB' 'Active: 9305436 kB' 'Inactive: 3508168 kB' 'Active(anon): 8909856 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511636 kB' 'Mapped: 185700 kB' 'Shmem: 8401456 kB' 'KReclaimable: 208832 kB' 'Slab: 598140 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 389308 kB' 'KernelStack: 12976 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10089636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197176 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.072 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.073 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43729600 kB' 'MemAvailable: 47243924 kB' 'Buffers: 2704 kB' 'Cached: 12302516 kB' 'SwapCached: 0 kB' 'Active: 9305776 kB' 'Inactive: 3508168 kB' 'Active(anon): 8910196 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511956 kB' 'Mapped: 185700 kB' 'Shmem: 8401472 kB' 'KReclaimable: 208832 kB' 'Slab: 598140 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 389308 kB' 'KernelStack: 13008 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10089660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197192 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.074 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:25.075 nr_hugepages=1024 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.075 resv_hugepages=0 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.075 surplus_hugepages=0 00:04:25.075 17:39:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.075 anon_hugepages=0 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:25.075 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43730028 kB' 'MemAvailable: 47244352 kB' 'Buffers: 2704 kB' 'Cached: 12302540 kB' 'SwapCached: 0 kB' 'Active: 9305496 kB' 'Inactive: 3508168 kB' 'Active(anon): 8909916 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511640 kB' 'Mapped: 185700 kB' 'Shmem: 8401496 kB' 'KReclaimable: 208832 kB' 'Slab: 598140 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 389308 kB' 'KernelStack: 12992 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10089684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197192 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.076 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20073908 kB' 'MemUsed: 12803032 kB' 'SwapCached: 0 kB' 'Active: 7219208 kB' 'Inactive: 3325912 kB' 'Active(anon): 6960196 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3325912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10261236 kB' 'Mapped: 111304 kB' 'AnonPages: 287000 kB' 'Shmem: 6676312 kB' 'KernelStack: 6136 kB' 'PageTables: 3548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116892 kB' 'Slab: 339992 kB' 'SReclaimable: 116892 kB' 'SUnreclaim: 223100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.077 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.078 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 23657048 kB' 'MemUsed: 4007732 kB' 'SwapCached: 0 kB' 'Active: 2086332 kB' 'Inactive: 182256 kB' 'Active(anon): 1949764 kB' 'Inactive(anon): 0 kB' 'Active(file): 136568 kB' 'Inactive(file): 182256 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2044056 kB' 'Mapped: 74396 kB' 'AnonPages: 224636 kB' 'Shmem: 1725232 kB' 'KernelStack: 6856 kB' 'PageTables: 4960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91940 kB' 'Slab: 258148 kB' 'SReclaimable: 91940 kB' 'SUnreclaim: 166208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.079 17:39:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.079 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 
17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 
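The two node scans above pull HugePages_Surp out of /sys/devices/system/node/node0/meminfo and node1/meminfo; both nodes report a 512-page pool with no surplus. The "node0=512 expecting 512" / "node1=512 expecting 512" lines that follow come from comparing each node's count against its expected share. A self-contained sketch of that per-node check (awk stands in for the pure-bash field parsing seen in the trace; the expected value of 512 is this box's half of the 1024-page pool):

    expected=512
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # HugePages_* lines carry no kB unit, so the last field is the count
        total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
        surp=$(awk '/HugePages_Surp/ {print $NF}' "$node_dir/meminfo")
        echo "node${node}=${total} expecting ${expected} (surplus ${surp})"
    done
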
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:25.080 node0=512 expecting 512 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:25.080 node1=512 expecting 512 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:25.080 00:04:25.080 real 0m1.389s 00:04:25.080 user 0m0.582s 00:04:25.080 sys 0m0.772s 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:25.080 17:39:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:25.080 ************************************ 00:04:25.080 END TEST per_node_1G_alloc 00:04:25.080 ************************************ 00:04:25.080 17:39:59 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:25.080 17:39:59 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.080 17:39:59 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.080 17:39:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:25.080 ************************************ 00:04:25.080 START TEST even_2G_alloc 
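even_2G_alloc, which starts here, repeats the verification after requesting an even split: NRHUGE=1024 with HUGE_EVEN_ALLOC=yes, so each of the two NUMA nodes should end up with 512 of the 2048 kB pages. The allocation itself is done by spdk's scripts/setup.sh, invoked a few lines below; as a rough sketch, an even per-node allocation through the kernel's standard hugetlb sysfs interface could look like the following (assumes 2048 kB pages and root privileges, and is not the setup.sh implementation):

    NRHUGE=${NRHUGE:-1024}
    nodes=(/sys/devices/system/node/node[0-9]*)
    per_node=$(( NRHUGE / ${#nodes[@]} ))
    for node_dir in "${nodes[@]}"; do
        # writing the per-node count asks the kernel to reserve that many 2 MB pages on that node
        echo "$per_node" > "$node_dir/hugepages/hugepages-2048kB/nr_hugepages"
    done
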
00:04:25.080 ************************************ 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.080 17:39:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:26.456 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:26.456 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:26.456 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:26.456 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:26.456 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:26.456 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:26.456 0000:00:04.2 (8086 
0e22): Already using the vfio-pci driver 00:04:26.456 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:26.456 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:26.456 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:26.456 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:26.456 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:26.456 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:26.456 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:26.456 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:26.456 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:26.456 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43735532 kB' 'MemAvailable: 47249856 kB' 'Buffers: 2704 kB' 'Cached: 12302632 kB' 'SwapCached: 0 kB' 'Active: 9307776 kB' 'Inactive: 3508168 kB' 'Active(anon): 8912196 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513952 kB' 'Mapped: 185712 kB' 'Shmem: 8401588 kB' 'KReclaimable: 208832 kB' 'Slab: 597920 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 389088 kB' 'KernelStack: 13024 kB' 'PageTables: 
8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10089672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197352 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.456 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
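The block above is setup/common.sh's get_meminfo walking /proc/meminfo one key at a time: every field that is not AnonHugePages hits `continue`, and when the key finally matches its value (0 kB here) is echoed and recorded as anon=0 just below. A simplified sketch of that lookup, not the real helper (which also handles per-node meminfo files and the `Node N` prefix):

    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue    # skip non-matching keys, as in the trace
            echo "${val:-0}"                    # matched: print the value in kB
            return 0
        done < /proc/meminfo
        echo 0                                  # key never matched: fall back to 0
    }
    get_meminfo AnonHugePages    # prints 0 on this machine per the trace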
00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.457 17:40:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43737956 kB' 'MemAvailable: 47252280 kB' 'Buffers: 2704 kB' 'Cached: 12302632 kB' 'SwapCached: 0 kB' 'Active: 9307528 kB' 'Inactive: 3508168 kB' 'Active(anon): 8911948 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513700 kB' 'Mapped: 185784 kB' 'Shmem: 8401588 kB' 'KReclaimable: 208832 kB' 'Slab: 597992 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 389160 kB' 'KernelStack: 12976 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10089692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197272 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.457 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
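This second scan is hugepages.sh@99 resolving HugePages_Surp the same way (surp=0 follows at the end of it), and hugepages.sh@100 then repeats the walk for HugePages_Rsvd. The same three counters can be pulled with plain awk; a hedged one-liner equivalent, not the script's own method:

    anon=$(awk '$1 == "AnonHugePages:"  {print $2}' /proc/meminfo)   # kB of THP-backed anon memory
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)   # surplus pages from overcommit
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)   # pages reserved, not yet faulted
    echo "anon=${anon} surp=${surp} resv=${resv}"                    # all 0 in this run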
00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43738112 kB' 'MemAvailable: 47252436 kB' 'Buffers: 2704 kB' 'Cached: 12302652 kB' 'SwapCached: 0 kB' 'Active: 9306792 kB' 'Inactive: 3508168 kB' 'Active(anon): 8911212 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512916 kB' 'Mapped: 185708 kB' 'Shmem: 8401608 kB' 'KReclaimable: 208832 kB' 'Slab: 597964 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 389132 kB' 'KernelStack: 12992 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10089716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197272 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
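Throughout these scans `node=` is empty, so the common.sh@23 test on /sys/devices/system/node/node/meminfo fails and the whole-system /proc/meminfo is read instead. A minimal sketch of the per-node variant, assuming node 0 exists; per-node lines carry the `Node 0` prefix (the one common.sh@29 strips) which has to be removed before the key can be matched:

    node=0
    node_f=/sys/devices/system/node/node${node}/meminfo
    if [[ -e $node_f ]]; then
        # lines look like: "Node 0 HugePages_Free:   512"
        sed -n "s/^Node ${node} HugePages_Free:[[:space:]]*//p" "$node_f"
    else
        echo "node ${node} has no meminfo" >&2
    fi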
00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.458 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 
17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.459 nr_hugepages=1024 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.459 resv_hugepages=0 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.459 surplus_hugepages=0 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.459 anon_hugepages=0 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43738680 
kB' 'MemAvailable: 47253004 kB' 'Buffers: 2704 kB' 'Cached: 12302680 kB' 'SwapCached: 0 kB' 'Active: 9307000 kB' 'Inactive: 3508168 kB' 'Active(anon): 8911420 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513112 kB' 'Mapped: 185708 kB' 'Shmem: 8401636 kB' 'KReclaimable: 208832 kB' 'Slab: 597964 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 389132 kB' 'KernelStack: 13008 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10090108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197288 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.459 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20072976 kB' 'MemUsed: 12803964 kB' 'SwapCached: 0 kB' 'Active: 7219892 kB' 'Inactive: 3325912 kB' 'Active(anon): 6960880 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3325912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10261240 kB' 'Mapped: 111308 kB' 'AnonPages: 287712 kB' 'Shmem: 6676316 kB' 'KernelStack: 6152 kB' 'PageTables: 3552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116892 kB' 'Slab: 339904 kB' 'SReclaimable: 116892 kB' 'SUnreclaim: 223012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:26.460 17:40:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.460 17:40:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.460 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.719 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 23672796 kB' 'MemUsed: 3991984 kB' 'SwapCached: 0 kB' 'Active: 2085304 kB' 'Inactive: 182256 kB' 'Active(anon): 1948736 kB' 'Inactive(anon): 0 kB' 'Active(file): 136568 kB' 'Inactive(file): 182256 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2044184 kB' 'Mapped: 74400 kB' 'AnonPages: 223500 kB' 'Shmem: 1725360 kB' 'KernelStack: 6808 kB' 'PageTables: 4796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91940 kB' 'Slab: 258052 kB' 'SReclaimable: 91940 kB' 'SUnreclaim: 166112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:26.720 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:26.721 node0=512 expecting 512 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:26.721 node1=512 expecting 512 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:26.721 00:04:26.721 real 0m1.444s 00:04:26.721 user 0m0.600s 00:04:26.721 sys 0m0.811s 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:26.721 17:40:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:26.721 ************************************ 00:04:26.721 END TEST even_2G_alloc 00:04:26.721 ************************************ 00:04:26.721 17:40:01 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:26.721 17:40:01 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:26.721 17:40:01 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:26.721 17:40:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:26.721 ************************************ 00:04:26.721 START TEST odd_alloc 00:04:26.721 ************************************ 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 
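The odd_alloc test that begins here requests 2098176 kB of hugepage memory (HUGEMEM=2049, as set a few lines further down), which works out to 1025 pages of 2048 kB once rounded up, and the harness then splits that count unevenly across the two NUMA nodes, 512 on one and 513 on the other, before re-running setup.sh. The snippet below is a minimal standalone sketch of how the per-node totals that the verify step keeps scanning for can be read back from sysfs; it is not the harness's own get_meminfo from setup/common.sh, and the helper name node_hugepages_total is purely illustrative.

    #!/usr/bin/env bash
    # Hedged sketch only: same idea as the log's get_meminfo loop, not the real helper.
    # Reads HugePages_Total for one NUMA node from sysfs, falling back to /proc/meminfo
    # when no node is given (or the per-node meminfo file does not exist).
    node_hugepages_total() {    # hypothetical helper name, for illustration
        local node=$1 mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # Per-node files prefix every line with "Node <n> "; strip that prefix the way
        # the harness does, then pick out the HugePages_Total value.
        awk '{ sub(/^Node [0-9]+ /, "") } $1 == "HugePages_Total:" { print $2 }' "$mem_f"
    }

    # Example: after the odd 1025-page allocation one node should report 512 and the
    # other 513, matching the nodes_test[] values visible in the log below.
    for n in 0 1; do
        echo "node$n HugePages_Total: $(node_hugepages_total "$n")"
    done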
00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.721 17:40:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:27.671 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:27.671 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:27.671 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:27.671 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:27.671 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:27.671 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:27.671 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:27.671 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:27.671 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:27.671 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:27.671 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:27.671 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:27.671 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 
00:04:27.671 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:27.671 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:27.940 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:27.940 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43756972 kB' 'MemAvailable: 47271296 kB' 'Buffers: 2704 kB' 'Cached: 12302772 kB' 'SwapCached: 0 kB' 'Active: 9300728 kB' 'Inactive: 3508168 kB' 'Active(anon): 8905148 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506616 kB' 'Mapped: 184856 kB' 'Shmem: 8401728 kB' 'KReclaimable: 208832 kB' 'Slab: 597668 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 388836 kB' 'KernelStack: 12832 kB' 'PageTables: 7716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 10063400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197240 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 
'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.940 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.940 
17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43760896 kB' 'MemAvailable: 47275220 kB' 'Buffers: 2704 kB' 'Cached: 12302776 kB' 'SwapCached: 0 kB' 'Active: 9301340 kB' 'Inactive: 3508168 kB' 'Active(anon): 8905760 kB' 
'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507288 kB' 'Mapped: 184856 kB' 'Shmem: 8401732 kB' 'KReclaimable: 208832 kB' 'Slab: 597668 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 388836 kB' 'KernelStack: 12880 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 10064588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197224 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.941 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.942 
17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... setup/common.sh@31-32: FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd are likewise read and skipped with "continue" ...]
00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:27.942 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43762512 kB' 'MemAvailable: 47276836 kB' 'Buffers: 2704 kB' 'Cached: 12302780 kB' 'SwapCached: 0 kB' 'Active: 9301312 kB' 'Inactive: 3508168 kB' 'Active(anon): 8905732 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506776 kB' 'Mapped: 184868 kB' 'Shmem: 8401736 kB' 'KReclaimable: 208832 kB' 'Slab: 597732 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 388900 kB' 'KernelStack: 13024 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 10064436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197320 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
[... setup/common.sh@31-32: each key from MemTotal through HugePages_Free is read and skipped with "continue" until HugePages_Rsvd matches ...]
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:27.943 nr_hugepages=1025
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:27.943 resv_hugepages=0
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:27.943 surplus_hugepages=0
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:27.943 anon_hugepages=0
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:27.943 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43760708 kB' 'MemAvailable: 47275032 kB' 'Buffers: 2704 kB' 'Cached: 12302800 kB' 'SwapCached: 0 kB' 'Active: 9301896 kB' 'Inactive: 3508168 kB' 'Active(anon): 8906316 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507788 kB' 'Mapped: 184868 kB' 'Shmem: 8401756 kB' 'KReclaimable: 208832 kB' 'Slab: 597732 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 388900 kB' 'KernelStack: 13360 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 10065824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197560 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
[... setup/common.sh@31-32: each key from MemTotal through Unaccepted is read and skipped with "continue" until HugePages_Total matches ...]
00:04:27.944 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:27.944 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:27.944 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
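The repeated IFS=': ' / read / [[ ... ]] trace above is a single helper doing all the work: it loads /proc/meminfo (or a per-node meminfo file when a node number is passed), strips the leading "Node <n> " prefix, and prints the value of the requested key. Below is a minimal bash sketch of that logic, reconstructed from the commands visible in the trace; the helper name get_meminfo_sketch is made up here, and the real get_meminfo in setup/common.sh may differ in detail.

#!/usr/bin/env bash
# Sketch of the meminfo lookup shown in the trace (assumed name, not the SPDK helper itself).
shopt -s extglob

get_meminfo_sketch() {
    local get=$1        # key to look up, e.g. HugePages_Surp
    local node=${2:-}   # optional NUMA node number
    local var val _
    local mem_f=/proc/meminfo mem

    # With a node argument, read that node's copy of meminfo instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"   # e.g. 1025 for HugePages_Total in the run above
        return 0
    done
    return 1
}

# get_meminfo_sketch HugePages_Total     -> 1025 on this machine
# get_meminfo_sketch HugePages_Surp 0    -> 0 (no surplus pages on node 0)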
00:04:27.944 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:27.944 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:27.944 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:27.944 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:27.944 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:27.944 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:27.944 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:27.944 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:27.945 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:27.945 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:27.945 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:27.945 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:27.945 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:27.945 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:27.945 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:27.945 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:27.945 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.945 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:27.945 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:27.945 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.945 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.945 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:27.945 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:27.945 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20074512 kB' 'MemUsed: 12802428 kB' 'SwapCached: 0 kB' 'Active: 7219676 kB' 'Inactive: 3325912 kB' 'Active(anon): 6960664 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3325912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10261248 kB' 'Mapped: 110768 kB' 'AnonPages: 287496 kB' 'Shmem: 6676324 kB' 'KernelStack: 6840 kB' 'PageTables: 5744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116892 kB' 'Slab: 339832 kB' 'SReclaimable: 116892 kB' 'SUnreclaim: 222940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-32: each key of node0's meminfo is read and skipped with "continue" until HugePages_Surp matches ...]
00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 23686084 kB' 'MemUsed: 3978696 kB' 'SwapCached: 0 kB' 'Active: 2083712 kB' 'Inactive: 182256 kB' 'Active(anon): 1947144 kB' 'Inactive(anon): 0 kB' 'Active(file): 136568 kB' 'Inactive(file): 182256 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2044296 kB' 'Mapped: 74100 kB' 'AnonPages: 221724 kB' 'Shmem: 1725472 kB' 'KernelStack: 6712 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91940 kB' 'Slab: 257900 kB' 'SReclaimable: 91940 kB' 'SUnreclaim: 165960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 
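Annotation: the scan above is setup/common.sh's get_meminfo at work. It switches to the per-node meminfo file when /sys/devices/system/node/node1/meminfo exists, strips the "Node 1 " prefix from each line, then walks every "key: value" pair until the requested key (HugePages_Surp here) matches and echoes its value. A minimal, self-contained sketch of that lookup pattern follows; the function name and error handling are illustrative, not the script's actual code.

    # Sketch: fetch one meminfo field, optionally scoped to a NUMA node,
    # mirroring the scan pattern in the trace above.
    lookup_meminfo() {
        local key=$1 node=${2:-}                # e.g. lookup_meminfo HugePages_Surp 1
        local file=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#"Node $node "}          # per-node files prefix every line with "Node N"
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$key" ]]; then
                echo "$val"                     # numeric value only; a trailing "kB" lands in $_
                return 0
            fi
        done < "$file"
        return 1
    }
    lookup_meminfo HugePages_Free 1             # -> 513 on the node-1 snapshot printed above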
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.205 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
# echo 'node0=512 expecting 513' 00:04:28.206 node0=512 expecting 513 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:28.206 node1=513 expecting 512 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:28.206 00:04:28.206 real 0m1.418s 00:04:28.206 user 0m0.613s 00:04:28.206 sys 0m0.769s 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:28.206 17:40:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:28.206 ************************************ 00:04:28.206 END TEST odd_alloc 00:04:28.206 ************************************ 00:04:28.206 17:40:02 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:28.206 17:40:02 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:28.206 17:40:02 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:28.206 17:40:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:28.206 ************************************ 00:04:28.206 START TEST custom_alloc 00:04:28.206 ************************************ 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:28.206 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:28.207 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:28.207 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:28.207 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:28.207 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:28.207 17:40:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:28.207 17:40:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.207 17:40:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:29.137 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:29.137 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:29.137 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:29.137 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:29.137 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:29.137 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:29.137 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:29.137 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:29.137 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:29.137 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:29.137 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:29.137 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:29.137 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:29.137 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:29.137 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:29.137 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:29.137 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:29.399 17:40:03 setup.sh.hugepages.custom_alloc -- 
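Annotation: the custom_alloc setup that just ran asks for two differently sized pools and pins them explicitly to the two NUMA nodes, which is where the HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' string and the nr_hugepages=1536 on the line that follows come from. The requested sizes (1048576 and 2097152) appear to be kB, i.e. 1 GiB and 2 GiB, divided by the 2048 kB hugepage size reported in the meminfo snapshot below. A sketch of that arithmetic, with variable names chosen for illustration:

    # Sketch: how the per-node page counts in HUGENODE fall out of the requested sizes.
    hugepage_kb=2048                              # Hugepagesize reported by /proc/meminfo on this box
    node0_pages=$(( 1048576 / hugepage_kb ))      # 1 GiB  -> 512 pages for node 0
    node1_pages=$(( 2097152 / hugepage_kb ))      # 2 GiB  -> 1024 pages for node 1
    total=$(( node0_pages + node1_pages ))        # 1536, matching HugePages_Total below
    HUGENODE="nodes_hp[0]=$node0_pages,nodes_hp[1]=$node1_pages"
    echo "$HUGENODE (nr_hugepages=$total)"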
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:29.399 17:40:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:29.399 17:40:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:29.399 17:40:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.399 17:40:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.399 17:40:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:29.399 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:29.399 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:29.399 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.399 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.399 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.399 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.399 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.399 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.399 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.399 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.399 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.399 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.399 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.399 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.399 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42711468 kB' 'MemAvailable: 46225792 kB' 'Buffers: 2704 kB' 'Cached: 12302896 kB' 'SwapCached: 0 kB' 'Active: 9301596 kB' 'Inactive: 3508168 kB' 'Active(anon): 8906016 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507404 kB' 'Mapped: 185832 kB' 'Shmem: 8401852 kB' 'KReclaimable: 208832 kB' 'Slab: 597856 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 389024 kB' 'KernelStack: 12896 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 10065172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197240 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
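Annotation: before counting hugepages, verify_nr_hugepages checks the transparent-hugepage mode (the "always [madvise] never" test a few records back) and, since it is not [never], reads AnonHugePages from the system-wide /proc/meminfo snapshot above so anonymous THP usage can be accounted for. A small self-contained sketch of that guard; the sysfs path is the usual location of the mode string and is an assumption, not something the trace prints.

    # Sketch: the anon-THP guard at the start of the verification step.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)  # e.g. "always [madvise] never"
    anon_kb=0
    if [[ $thp != *"[never]"* ]]; then
        # kB of transparent hugepages backing anonymous mappings, system-wide
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "THP mode: ${thp:-unknown}, AnonHugePages: ${anon_kb} kB"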
00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42713464 kB' 'MemAvailable: 46227788 kB' 'Buffers: 2704 kB' 'Cached: 12302900 kB' 'SwapCached: 0 kB' 'Active: 9304476 kB' 'Inactive: 3508168 kB' 'Active(anon): 8908896 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510332 kB' 'Mapped: 185352 kB' 'Shmem: 8401856 kB' 'KReclaimable: 208832 kB' 'Slab: 597852 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 389020 kB' 'KernelStack: 12960 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 10068092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197192 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.400 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
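[editor's note] The repetition above is the xtrace of setup/common.sh's get_meminfo helper: it reads /proc/meminfo one "key: value" pair at a time with IFS=': ' and read -r var val _, and every field other than the requested one (here HugePages_Surp) falls through to "continue", so each meminfo key appears exactly once per lookup in the trace. A minimal sketch of that pattern, assuming a simplified stand-in (get_meminfo_value is a hypothetical name, not the actual setup/common.sh source):

    # Sketch only: return one field's value from /proc/meminfo.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # every key that is not the requested one shows up as a 'continue' in the xtrace
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }
    # e.g. get_meminfo_value HugePages_Surp   ->  0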
00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.401 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42709860 kB' 'MemAvailable: 46224184 kB' 'Buffers: 2704 kB' 'Cached: 12302920 kB' 'SwapCached: 0 kB' 'Active: 9306484 kB' 'Inactive: 3508168 kB' 'Active(anon): 8910904 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512308 kB' 'Mapped: 185692 kB' 'Shmem: 8401876 kB' 'KReclaimable: 208832 kB' 'Slab: 597884 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 389052 kB' 'KernelStack: 12960 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 10069840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
197196 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.402 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@100 -- # resv=0 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:29.403 nr_hugepages=1536 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.403 resv_hugepages=0 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.403 surplus_hugepages=0 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.403 anon_hugepages=0 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42710040 kB' 'MemAvailable: 46224364 kB' 'Buffers: 2704 kB' 'Cached: 12302940 kB' 'SwapCached: 0 kB' 'Active: 9300812 kB' 'Inactive: 3508168 kB' 'Active(anon): 8905232 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506604 kB' 'Mapped: 185256 kB' 'Shmem: 8401896 kB' 'KReclaimable: 208832 kB' 'Slab: 597884 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 389052 kB' 'KernelStack: 12944 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 10063744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197208 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
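[editor's note] At this point the trace has established surp=0 and resv=0, echoed nr_hugepages=1536, and re-run get_meminfo for HugePages_Total to confirm the kernel really allocated 1536 pages. The consistency check implied by hugepages.sh@107/@110 amounts to the arithmetic below (a hedged sketch; variable names approximate the script, and get_meminfo_value is the hypothetical helper sketched earlier):

    nr_hugepages=1536
    surp=0                                        # HugePages_Surp from /proc/meminfo
    resv=0                                        # HugePages_Rsvd from /proc/meminfo
    total=$(get_meminfo_value HugePages_Total)    # reported as 1536 in the snapshot above
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"
    # 1536 pages * 2048 kB/page = 3145728 kB, matching the Hugetlb figure in the meminfo dump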
00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.403 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.404 17:40:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20085768 kB' 'MemUsed: 12791172 kB' 'SwapCached: 0 kB' 'Active: 7217684 kB' 'Inactive: 3325912 kB' 'Active(anon): 6958672 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3325912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10261252 kB' 'Mapped: 110760 kB' 'AnonPages: 285536 kB' 'Shmem: 6676328 kB' 'KernelStack: 6248 kB' 'PageTables: 3532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116892 kB' 'Slab: 339856 kB' 'SReclaimable: 116892 kB' 'SUnreclaim: 222964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.404 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.405 17:40:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 22624020 kB' 'MemUsed: 5040760 kB' 'SwapCached: 0 kB' 'Active: 2083136 kB' 'Inactive: 182256 kB' 'Active(anon): 1946568 kB' 'Inactive(anon): 0 kB' 'Active(file): 136568 kB' 'Inactive(file): 182256 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2044432 kB' 'Mapped: 74080 kB' 'AnonPages: 221044 kB' 'Shmem: 1725608 kB' 'KernelStack: 6696 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91940 kB' 'Slab: 258028 kB' 'SReclaimable: 91940 kB' 'SUnreclaim: 166088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.405 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:29.406 node0=512 expecting 512 00:04:29.406 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.406 17:40:04 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.406 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.406 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:29.406 node1=1024 expecting 1024 00:04:29.406 17:40:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:29.406 00:04:29.406 real 0m1.360s 00:04:29.406 user 0m0.570s 00:04:29.406 sys 0m0.754s 00:04:29.406 17:40:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:29.406 17:40:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.406 ************************************ 00:04:29.406 END TEST custom_alloc 00:04:29.406 ************************************ 00:04:29.406 17:40:04 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:29.406 17:40:04 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:29.406 17:40:04 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.406 17:40:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.662 ************************************ 00:04:29.662 START TEST no_shrink_alloc 00:04:29.662 ************************************ 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
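Note: the trace above walks /sys/devices/system/node/node0/meminfo and node1/meminfo field by field to pull HugePages_Surp for each node before checking the expected 512/1024 split. As a minimal illustrative sketch only (not the SPDK setup/common.sh code itself, and assuming a Linux host where the per-node meminfo files exist), the same per-node lookup can be expressed as:

    #!/usr/bin/env bash
    # Minimal sketch, NOT the SPDK script: read one meminfo field for one NUMA node.
    get_node_meminfo() {
        local field=$1 node=$2
        local file=/sys/devices/system/node/node${node}/meminfo
        [[ -e $file ]] || file=/proc/meminfo   # fall back to the system-wide view
        # Per-node files prefix every line with "Node <N> "; strip that, then match the field.
        sed -e "s/^Node ${node} //" "$file" | awk -v f="${field}:" '$1 == f {print $2}'
    }

    get_node_meminfo HugePages_Surp 0
    get_node_meminfo HugePages_Surp 1

The "Node <N> " prefix on every line of the per-node files is why the traced helper strips it before matching field names against patterns such as HugePages_Surp.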
00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:29.662 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.663 17:40:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:30.593 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:30.593 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:30.593 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:30.593 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:30.593 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:30.593 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:30.593 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:30.593 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:30.593 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:30.593 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:30.593 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:30.593 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:30.593 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:30.593 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:30.593 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:30.593 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:30.593 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.855 17:40:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43753448 kB' 'MemAvailable: 47267772 kB' 'Buffers: 2704 kB' 'Cached: 12303028 kB' 'SwapCached: 0 kB' 'Active: 9301288 kB' 'Inactive: 3508168 kB' 'Active(anon): 8905708 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506864 kB' 'Mapped: 184860 kB' 'Shmem: 8401984 kB' 'KReclaimable: 208832 kB' 'Slab: 597600 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 388768 kB' 'KernelStack: 12944 kB' 'PageTables: 7780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10063940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197304 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.855 17:40:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.855 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43753680 kB' 'MemAvailable: 47268004 kB' 'Buffers: 2704 kB' 'Cached: 12303032 kB' 'SwapCached: 0 kB' 'Active: 9301288 kB' 'Inactive: 3508168 kB' 'Active(anon): 8905708 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506924 kB' 'Mapped: 184936 kB' 'Shmem: 8401988 kB' 'KReclaimable: 208832 kB' 'Slab: 597592 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 388760 kB' 'KernelStack: 12976 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10063960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197272 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 
'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.856 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 
17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 
17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.857 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43753176 kB' 'MemAvailable: 47267500 kB' 'Buffers: 2704 kB' 'Cached: 12303048 kB' 'SwapCached: 0 kB' 'Active: 9301812 kB' 'Inactive: 3508168 kB' 'Active(anon): 8906232 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506980 kB' 'Mapped: 184860 kB' 'Shmem: 8402004 kB' 'KReclaimable: 208832 kB' 'Slab: 597592 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 388760 kB' 'KernelStack: 13008 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10069696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197304 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.858 17:40:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.858 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.859 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=1024 00:04:30.860 nr_hugepages=1024 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.860 resv_hugepages=0 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.860 surplus_hugepages=0 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.860 anon_hugepages=0 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43753176 kB' 'MemAvailable: 47267500 kB' 'Buffers: 2704 kB' 'Cached: 12303072 kB' 'SwapCached: 0 kB' 'Active: 9300912 kB' 'Inactive: 3508168 kB' 'Active(anon): 8905332 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506476 kB' 'Mapped: 184860 kB' 'Shmem: 8402028 kB' 'KReclaimable: 208832 kB' 'Slab: 597584 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 388752 kB' 'KernelStack: 12928 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10064392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197256 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.860 17:40:05 setup.sh.hugepages.no_shrink_alloc -- 
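(Editor's sketch of the bookkeeping visible at hugepages.sh lines 97-110 in this trace: with anon, surp and resv all 0, the pool reported by the kernel must match the requested nr_hugepages of 1024 before the no_shrink_alloc check proceeds. This is a rough reconstruction under the assumption of the get_meminfo_sketch helper shown earlier, not the test's actual code.)

    nr_hugepages=1024
    anon=$(get_meminfo_sketch AnonHugePages)      # 0 kB reported above
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0 reported above
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 reported above
    total=$(get_meminfo_sketch HugePages_Total)   # 1024 reported above

    (( total == nr_hugepages + surp + resv )) || echo "unexpected surplus/reserved pages" >&2
    (( total == nr_hugepages ))               || echo "pool size differs from request" >&2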
setup/common.sh@32 -- # continue
[... 00:04:30.860-861 17:40:05 setup.sh.hugepages.no_shrink_alloc: the setup/common.sh@31 IFS=': ' / @31 read -r var val _ / @32 [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / @32 continue cycle repeats for every non-matching /proc/meminfo key: MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted ...]
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:30.861 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19045756 kB' 'MemUsed: 13831184 kB' 'SwapCached: 0 kB' 'Active: 7218272 kB' 'Inactive: 3325912 kB' 'Active(anon): 6959260 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3325912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10261344 kB' 'Mapped: 110780 kB' 'AnonPages: 286028 kB' 'Shmem: 6676420 kB' 'KernelStack: 6232 kB' 'PageTables: 3432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116892 kB' 'Slab: 339784 kB' 'SReclaimable: 116892 kB' 'SUnreclaim: 222892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
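The cycle traced above is the get_meminfo helper from setup/common.sh: it reads /proc/meminfo, or the per-node /sys/devices/system/node/nodeN/meminfo when a node argument is given, strips the "Node N " prefix, and walks key/value pairs until the requested field matches, then echoes the value and returns 0. A minimal sketch of that pattern, with illustrative names and not the verbatim SPDK script:

  #!/usr/bin/env bash
  # Sketch of the meminfo lookup traced above (assumed simplification of
  # setup/common.sh get_meminfo; function and variable names are illustrative).
  shopt -s extglob

  meminfo_get() {
      local get=$1 node=$2
      local var val _
      local mem_f=/proc/meminfo mem

      # Per-node queries read that node's meminfo instead of the global file.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      # Per-node lines carry a "Node N " prefix; strip it so both file formats
      # parse identically as "Key:   value [kB]".
      mem=("${mem[@]#Node +([0-9]) }")

      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"   # e.g. 1024 for HugePages_Total, kB figures otherwise
              return 0
          fi
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  # Usage matching the trace:
  #   meminfo_get HugePages_Total     -> 1024 (system-wide)
  #   meminfo_get HugePages_Surp 0    -> 0    (node0)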
[... 00:04:30.861-862 17:40:05 setup.sh.hugepages.no_shrink_alloc: the setup/common.sh@31 IFS=': ' / @31 read -r var val _ / @32 [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / @32 continue cycle repeats for every non-matching node0 key: MemTotal MemFree MemUsed SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked Dirty Writeback FilePages Mapped AnonPages Shmem KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp KReclaimable Slab SReclaimable SUnreclaim AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped Unaccepted HugePages_Total HugePages_Free ...]
00:04:30.862 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.862 17:40:05
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.862 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.862 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.862 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.862 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.862 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.862 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:30.862 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.862 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:30.863 node0=1024 expecting 1024 00:04:30.863 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:30.863 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:30.863 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:30.863 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:30.863 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.863 17:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:31.795 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:31.795 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:31.795 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:31.795 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:31.795 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:31.795 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:31.795 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:31.795 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:31.795 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:31.795 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:31.795 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:31.795 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:31.795 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:31.795 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:31.795 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:31.795 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:31.795 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:32.059 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:32.059 17:40:06 
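The check above confirms node0=1024 expecting 1024, after which the test re-runs scripts/setup.sh with CLEAR_HUGE=no and NRHUGE=512; setup.sh keeps the existing, larger pool and only reports "INFO: Requested 512 hugepages but 1024 already allocated on node0", which is the no-shrink behaviour this case exercises. A rough grow-only sketch of that idea (assumed 2048 kB pages and per-node sysfs paths; not the verbatim setup.sh logic):

  #!/usr/bin/env bash
  # Illustrative grow-only hugepage allocation per NUMA node (assumes 2 MB pages);
  # a sketch of the behaviour reported above, not the verbatim scripts/setup.sh.
  NRHUGE=${NRHUGE:-512}

  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      nr_file=$node_dir/hugepages/hugepages-2048kB/nr_hugepages
      [[ -w $nr_file ]] || continue   # needs root; skip nodes without a 2 MB pool
      allocated=$(<"$nr_file")
      if (( allocated >= NRHUGE )); then
          # Never shrink an existing pool - just report it, as setup.sh does here.
          echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on node$node"
      else
          echo "$NRHUGE" > "$nr_file"
      fi
  done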
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43743024 kB' 'MemAvailable: 47257348 kB' 'Buffers: 2704 kB' 'Cached: 12303140 kB' 'SwapCached: 0 kB' 'Active: 9301940 kB' 'Inactive: 3508168 kB' 'Active(anon): 8906360 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507440 kB' 'Mapped: 184892 kB' 'Shmem: 8402096 kB' 'KReclaimable: 208832 kB' 'Slab: 597616 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 388784 kB' 'KernelStack: 12976 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10064180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197288 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.060 17:40:06 setup.sh.hugepages.no_shrink_alloc -- 
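verify_nr_hugepages only counts AnonHugePages into its anon figure when the hugepages.sh@96 guard [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] passes, i.e. when transparent hugepages are not set to never; anon is then read from /proc/meminfo and comes out as 0 a little further down. Assuming that "always [madvise] never" string comes from /sys/kernel/mm/transparent_hugepage/enabled, a self-contained sketch of the same guard:

  #!/usr/bin/env bash
  # Illustrative THP guard (assumes the traced string is the contents of
  # /sys/kernel/mm/transparent_hugepage/enabled); not the verbatim hugepages.sh.
  thp=/sys/kernel/mm/transparent_hugepage/enabled
  anon=0
  if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
      # THP is not disabled (madvise mode in this run), so anonymous hugepages
      # can exist; AnonHugePages is reported in kB in /proc/meminfo (0 kB here).
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "anon=$anon"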
setup/common.sh@32 -- # continue
[... 00:04:32.060-061 17:40:06 setup.sh.hugepages.no_shrink_alloc: the setup/common.sh@31 IFS=': ' / @31 read -r var val _ / @32 [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / @32 continue cycle repeats for every non-matching /proc/meminfo key: MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS ...]
00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.061 17:40:06
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43746368 kB' 'MemAvailable: 47260692 kB' 'Buffers: 2704 kB' 'Cached: 12303144 kB' 'SwapCached: 0 kB' 'Active: 9301352 kB' 'Inactive: 3508168 kB' 'Active(anon): 8905772 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506832 kB' 'Mapped: 184980 kB' 'Shmem: 8402100 kB' 'KReclaimable: 208832 kB' 'Slab: 597656 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 388824 kB' 'KernelStack: 12944 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10064200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197272 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 
17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.061 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 
17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43746388 kB' 'MemAvailable: 47260712 kB' 'Buffers: 2704 kB' 'Cached: 12303160 kB' 'SwapCached: 0 kB' 'Active: 9301452 kB' 
'Inactive: 3508168 kB' 'Active(anon): 8905872 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506936 kB' 'Mapped: 184872 kB' 'Shmem: 8402116 kB' 'KReclaimable: 208832 kB' 'Slab: 597656 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 388824 kB' 'KernelStack: 12992 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10064220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197288 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.064 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:32.065 nr_hugepages=1024 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.065 resv_hugepages=0 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.065 surplus_hugepages=0 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.065 anon_hugepages=0 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43746136 kB' 'MemAvailable: 47260460 kB' 'Buffers: 2704 kB' 
'Cached: 12303184 kB' 'SwapCached: 0 kB' 'Active: 9301440 kB' 'Inactive: 3508168 kB' 'Active(anon): 8905860 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506940 kB' 'Mapped: 184872 kB' 'Shmem: 8402140 kB' 'KReclaimable: 208832 kB' 'Slab: 597656 kB' 'SReclaimable: 208832 kB' 'SUnreclaim: 388824 kB' 'KernelStack: 12992 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10064244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197288 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.065 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.066 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19035280 kB' 'MemUsed: 13841660 kB' 'SwapCached: 0 kB' 'Active: 7217884 kB' 'Inactive: 3325912 kB' 'Active(anon): 6958872 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3325912 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10261428 
kB' 'Mapped: 110788 kB' 'AnonPages: 285536 kB' 'Shmem: 6676504 kB' 'KernelStack: 6280 kB' 'PageTables: 3572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116892 kB' 'Slab: 339840 kB' 'SReclaimable: 116892 kB' 'SUnreclaim: 222948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.067 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.068 17:40:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:32.068 node0=1024 expecting 1024 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:32.068 00:04:32.068 real 0m2.603s 00:04:32.068 user 0m1.073s 00:04:32.068 sys 0m1.454s 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:32.068 17:40:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:32.068 ************************************ 00:04:32.068 END TEST no_shrink_alloc 00:04:32.068 ************************************ 00:04:32.068 17:40:06 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:32.068 17:40:06 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:32.068 17:40:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:32.068 17:40:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.068 17:40:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.068 17:40:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.068 17:40:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.068 17:40:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:32.068 17:40:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.068 17:40:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.068 17:40:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.068 17:40:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.068 17:40:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:32.068 17:40:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:32.068 00:04:32.068 real 0m11.025s 00:04:32.068 user 0m4.207s 00:04:32.068 sys 0m5.588s 00:04:32.068 17:40:06 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:32.068 17:40:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:32.068 ************************************ 00:04:32.068 END TEST hugepages 00:04:32.068 ************************************ 00:04:32.326 17:40:06 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:32.326 17:40:06 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:32.326 17:40:06 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:32.326 17:40:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:32.326 ************************************ 00:04:32.326 START TEST driver 00:04:32.326 ************************************ 00:04:32.326 17:40:06 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:32.326 * Looking for test storage... 
00:04:32.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:32.326 17:40:06 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:32.326 17:40:06 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.326 17:40:06 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:34.853 17:40:09 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:34.853 17:40:09 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:34.853 17:40:09 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.853 17:40:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:34.853 ************************************ 00:04:34.853 START TEST guess_driver 00:04:34.853 ************************************ 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:34.853 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:34.853 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:34.853 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:34.853 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:34.853 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:34.853 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:34.853 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:34.853 17:40:09 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:34.853 Looking for driver=vfio-pci 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.853 17:40:09 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.800 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.801 17:40:10 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.801 17:40:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.769 17:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.769 17:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.769 17:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.039 17:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:37.039 17:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:37.039 17:40:11 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.039 17:40:11 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:39.560 00:04:39.560 real 0m4.595s 00:04:39.560 user 0m0.988s 00:04:39.560 sys 0m1.689s 00:04:39.560 17:40:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.560 17:40:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:39.560 ************************************ 00:04:39.560 END TEST guess_driver 00:04:39.560 ************************************ 00:04:39.560 00:04:39.560 real 0m6.984s 00:04:39.560 user 0m1.547s 00:04:39.560 sys 0m2.652s 00:04:39.560 17:40:13 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.560 
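The guess_driver trace above picks vfio-pci by checking the vfio unsafe-no-IOMMU parameter, counting IOMMU groups, and resolving the vfio_pci module chain with modprobe. A hedged sketch of that decision, illustrative rather than the exact setup/driver.sh source; the fallback string matches the "No valid driver found" sentinel the test compares against:

pick_driver_sketch() (
    shopt -s nullglob
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    local groups=(/sys/kernel/iommu_groups/*)   # 141 groups in the run above
    if [[ $unsafe_vfio == [Yy] ]] || (( ${#groups[@]} > 0 )); then
        # vfio_pci counts as usable when modprobe can resolve its module chain
        if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
            return 0
        fi
    fi
    echo 'No valid driver found'
    return 1
)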
17:40:13 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:39.560 ************************************ 00:04:39.560 END TEST driver 00:04:39.560 ************************************ 00:04:39.560 17:40:13 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:39.560 17:40:13 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.560 17:40:13 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.560 17:40:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:39.560 ************************************ 00:04:39.560 START TEST devices 00:04:39.560 ************************************ 00:04:39.561 17:40:13 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:39.561 * Looking for test storage... 00:04:39.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:39.561 17:40:13 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:39.561 17:40:13 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:39.561 17:40:13 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.561 17:40:13 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:40.931 17:40:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:40.931 17:40:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:40.931 17:40:15 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:40.931 17:40:15 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:40.931 17:40:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:40.931 17:40:15 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:40.931 17:40:15 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:40.931 17:40:15 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:40.931 17:40:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:40.931 17:40:15 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:40.931 No valid GPT data, 
bailing 00:04:40.931 17:40:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:40.931 17:40:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:40.931 17:40:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:40.931 17:40:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:40.931 17:40:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:40.931 17:40:15 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:40.931 17:40:15 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:40.931 17:40:15 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:40.931 17:40:15 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.931 17:40:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:40.931 ************************************ 00:04:40.931 START TEST nvme_mount 00:04:40.931 ************************************ 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:40.931 17:40:15 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:40.931 17:40:15 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:41.862 Creating new GPT entries in memory. 00:04:41.862 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:41.862 other utilities. 00:04:41.862 17:40:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:41.862 17:40:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.862 17:40:16 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.862 17:40:16 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.862 17:40:16 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:42.792 Creating new GPT entries in memory. 00:04:42.792 The operation has completed successfully. 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 804535 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
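The verify step that follows in the trace runs setup.sh config with PCI_ALLOWED pinned to the test controller and reads each output row as "<pci addr> _ _ <status>", marking the device found once its status reports the expected mount as an active device. A rough sketch under that assumption about the column layout (the setup.sh path is the one used throughout this job; the helper name is illustrative):

spdk_setup=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh

verify_sketch() {
    local dev=$1 mounts=$2 found=0 pci _ status
    while read -r pci _ _ status; do
        [[ $pci == "$dev" ]] || continue
        # e.g. "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
        [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
    done < <(PCI_ALLOWED=$dev $spdk_setup config)
    (( found == 1 ))
}

# e.g.: verify_sketch 0000:88:00.0 nvme0n1:nvme0n1p1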
00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.792 17:40:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.793 17:40:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.170 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:44.171 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:44.171 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:44.428 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:44.428 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:44.428 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:44.428 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:44.428 17:40:18 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:44.428 17:40:18 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:44.428 17:40:18 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.428 17:40:18 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:44.428 17:40:18 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:44.428 17:40:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.428 17:40:19 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.428 17:40:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:44.428 17:40:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:44.428 17:40:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.428 17:40:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.428 17:40:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:44.428 17:40:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:44.428 17:40:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:44.428 17:40:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:44.428 17:40:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.428 17:40:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:44.428 17:40:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:44.429 17:40:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.429 17:40:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.359 17:40:20 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.359 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.616 17:40:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.547 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.804 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.804 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:46.804 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:46.804 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:46.805 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.805 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.805 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.805 17:40:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:46.805 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:46.805 00:04:46.805 real 0m6.096s 00:04:46.805 user 0m1.406s 00:04:46.805 sys 0m2.282s 00:04:46.805 17:40:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.805 17:40:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:46.805 ************************************ 00:04:46.805 END TEST nvme_mount 00:04:46.805 ************************************ 
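The cleanup_nvme path that closes the test above unmounts the test directory and wipes both the partition and the whole disk; a short sketch under the same assumptions as before:
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
mountpoint -q "$mnt" && umount "$mnt"                    # unmount only if it is really mounted
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # erase the ext4 signature on the partition
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # erase primary/backup GPT and the protective MBR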
00:04:46.805 17:40:21 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:46.805 17:40:21 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:46.805 17:40:21 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.805 17:40:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:46.805 ************************************ 00:04:46.805 START TEST dm_mount 00:04:46.805 ************************************ 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:46.805 17:40:21 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:48.177 Creating new GPT entries in memory. 00:04:48.177 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:48.177 other utilities. 00:04:48.177 17:40:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:48.177 17:40:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.177 17:40:22 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:48.177 17:40:22 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:48.177 17:40:22 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:49.108 Creating new GPT entries in memory. 00:04:49.108 The operation has completed successfully. 
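The sector boundaries passed to the two sgdisk --new calls (the second appears just below) fall out of the 1 GiB-per-partition size used by partition_drive; a quick sketch of that arithmetic, with names loosely matching the trace:
size=1073741824                           # 1 GiB per partition, in bytes
(( size /= 512 ))                         # convert to 512-byte sectors -> 2097152
part_start=2048
part_end=$(( part_start + size - 1 ))     # partition 1: sectors 2048..2099199
part2_start=$(( part_end + 1 ))           # partition 2 starts right after: 2099200
part2_end=$(( part2_start + size - 1 ))   # ..4196351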
00:04:49.108 17:40:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:49.109 17:40:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:49.109 17:40:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:49.109 17:40:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:49.109 17:40:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:50.043 The operation has completed successfully. 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 806800 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.043 17:40:24 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:50.975 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:51.232 17:40:25 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.232 17:40:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.164 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.165 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.165 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.165 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.165 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.165 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.165 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.165 17:40:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.422 17:40:27 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.422 17:40:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:52.422 17:40:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:52.422 17:40:27 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:52.422 17:40:27 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:52.422 17:40:27 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:52.422 17:40:27 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:52.422 17:40:27 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.422 17:40:27 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:52.422 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.422 17:40:27 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:52.422 17:40:27 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:52.422 00:04:52.422 real 0m5.604s 00:04:52.422 user 0m0.918s 00:04:52.422 sys 0m1.576s 00:04:52.422 17:40:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:52.422 17:40:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:52.422 ************************************ 00:04:52.422 END TEST dm_mount 00:04:52.422 ************************************ 00:04:52.422 17:40:27 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:52.422 17:40:27 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:52.422 17:40:27 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.422 17:40:27 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.422 17:40:27 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:52.422 17:40:27 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.422 17:40:27 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:52.679 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:52.679 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:52.679 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:52.679 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:52.679 17:40:27 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:52.679 17:40:27 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:52.679 17:40:27 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:52.679 17:40:27 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.679 17:40:27 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:52.679 17:40:27 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.679 17:40:27 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:52.679 00:04:52.679 real 0m13.513s 00:04:52.680 user 0m2.945s 00:04:52.680 sys 0m4.809s 00:04:52.680 17:40:27 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:52.680 17:40:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:52.680 ************************************ 00:04:52.680 END TEST devices 00:04:52.680 ************************************ 00:04:52.680 00:04:52.680 real 0m41.776s 00:04:52.680 user 0m11.856s 00:04:52.680 sys 0m18.313s 00:04:52.680 17:40:27 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:52.680 17:40:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:52.680 ************************************ 00:04:52.680 END TEST setup.sh 00:04:52.680 ************************************ 00:04:52.680 17:40:27 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:54.077 Hugepages 00:04:54.077 node hugesize free / total 00:04:54.077 node0 1048576kB 0 / 0 00:04:54.077 node0 2048kB 2048 / 2048 00:04:54.077 node1 1048576kB 0 / 0 00:04:54.077 node1 2048kB 0 / 0 00:04:54.077 00:04:54.077 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:54.077 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:54.077 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:54.077 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:54.077 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:54.077 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:54.077 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:54.077 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:54.077 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:54.077 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:54.077 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:54.077 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:54.077 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:54.077 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:54.077 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:54.077 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:54.077 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:54.077 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:54.077 17:40:28 -- spdk/autotest.sh@130 -- # uname -s 00:04:54.077 17:40:28 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:54.077 17:40:28 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:54.077 17:40:28 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:55.009 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:55.009 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:55.009 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:55.009 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:55.009 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:55.009 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:55.009 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:55.009 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:55.009 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:55.009 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:55.009 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:55.268 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:55.268 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:55.268 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:55.268 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:55.268 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:56.203 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:56.203 17:40:30 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:57.577 17:40:31 -- common/autotest_common.sh@1529 -- # bdfs=() 00:04:57.577 17:40:31 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:57.577 17:40:31 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:57.577 17:40:31 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:57.577 17:40:31 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:57.577 17:40:31 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:57.577 17:40:31 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.577 17:40:31 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:57.577 17:40:31 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:57.577 17:40:32 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:57.577 17:40:32 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:04:57.577 17:40:32 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:58.509 Waiting for block devices as requested 00:04:58.509 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:58.509 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:58.509 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:58.767 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:58.767 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:58.767 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:58.767 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:58.767 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:59.025 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:59.025 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:59.025 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:59.025 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:59.282 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:59.282 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:59.282 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:59.540 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:59.540 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:59.540 17:40:34 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
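get_nvme_bdfs above discovers controllers by asking gen_nvme.sh for an SPDK bdev config fragment and extracting the PCI addresses with jq; the filter below is the one from the trace, and the array handling around it is illustrative only:
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))   # e.g. 0000:88:00.0
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }   # nothing to test without a controller
printf '%s\n' "${bdfs[@]}"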
00:04:59.540 17:40:34 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:59.540 17:40:34 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:04:59.540 17:40:34 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:04:59.540 17:40:34 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:59.540 17:40:34 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:59.540 17:40:34 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:59.540 17:40:34 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:59.540 17:40:34 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:59.540 17:40:34 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:59.540 17:40:34 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:59.540 17:40:34 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:59.540 17:40:34 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:59.540 17:40:34 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:04:59.540 17:40:34 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:59.540 17:40:34 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:59.540 17:40:34 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:59.540 17:40:34 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:59.540 17:40:34 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:59.540 17:40:34 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:59.540 17:40:34 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:59.540 17:40:34 -- common/autotest_common.sh@1553 -- # continue 00:04:59.540 17:40:34 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:59.540 17:40:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.540 17:40:34 -- common/autotest_common.sh@10 -- # set +x 00:04:59.540 17:40:34 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:59.540 17:40:34 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:59.540 17:40:34 -- common/autotest_common.sh@10 -- # set +x 00:04:59.540 17:40:34 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:00.913 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:00.913 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:00.913 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:00.913 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:00.913 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:00.913 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:00.913 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:00.913 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:00.913 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:00.913 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:00.913 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:00.913 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:00.913 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:00.913 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:00.913 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:00.913 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:01.843 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:02.101 17:40:36 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:02.101 17:40:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.101 17:40:36 -- 
common/autotest_common.sh@10 -- # set +x 00:05:02.101 17:40:36 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:02.101 17:40:36 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:02.101 17:40:36 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:02.101 17:40:36 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:02.101 17:40:36 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:02.101 17:40:36 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:02.101 17:40:36 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:02.101 17:40:36 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:02.101 17:40:36 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:02.101 17:40:36 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:02.101 17:40:36 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:02.101 17:40:36 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:02.101 17:40:36 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:02.101 17:40:36 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:02.101 17:40:36 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:02.101 17:40:36 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:02.101 17:40:36 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:02.101 17:40:36 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:02.101 17:40:36 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:05:02.101 17:40:36 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:05:02.101 17:40:36 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=812100 00:05:02.101 17:40:36 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.101 17:40:36 -- common/autotest_common.sh@1594 -- # waitforlisten 812100 00:05:02.101 17:40:36 -- common/autotest_common.sh@827 -- # '[' -z 812100 ']' 00:05:02.101 17:40:36 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.101 17:40:36 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:02.101 17:40:36 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.101 17:40:36 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:02.101 17:40:36 -- common/autotest_common.sh@10 -- # set +x 00:05:02.101 [2024-07-20 17:40:36.819041] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
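opal_revert_cleanup above narrows the bdf list to controllers whose PCI device ID is 0x0a54 by reading sysfs, then starts spdk_tgt and, as the RPC exchange below shows, asks it to revert OPAL on each one. A minimal sketch of the device-ID filter, with the sysfs path taken from the trace and the helper reuse being an assumption:
bdfs=()
for bdf in $(get_nvme_bdfs); do                        # enumeration as sketched earlier
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # PCI device ID, e.g. 0x0a54
    [[ $device == 0x0a54 ]] && bdfs+=("$bdf")          # keep only the controllers of interest
done
printf '%s\n' "${bdfs[@]}"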
00:05:02.101 [2024-07-20 17:40:36.819133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid812100 ] 00:05:02.101 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.101 [2024-07-20 17:40:36.880695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.360 [2024-07-20 17:40:36.970851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.617 17:40:37 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:02.617 17:40:37 -- common/autotest_common.sh@860 -- # return 0 00:05:02.617 17:40:37 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:02.617 17:40:37 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:02.617 17:40:37 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:05.893 nvme0n1 00:05:05.893 17:40:40 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:05.893 [2024-07-20 17:40:40.521561] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:05.893 [2024-07-20 17:40:40.521614] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:05.893 request: 00:05:05.893 { 00:05:05.893 "nvme_ctrlr_name": "nvme0", 00:05:05.893 "password": "test", 00:05:05.893 "method": "bdev_nvme_opal_revert", 00:05:05.893 "req_id": 1 00:05:05.893 } 00:05:05.893 Got JSON-RPC error response 00:05:05.893 response: 00:05:05.893 { 00:05:05.893 "code": -32603, 00:05:05.893 "message": "Internal error" 00:05:05.893 } 00:05:05.893 17:40:40 -- common/autotest_common.sh@1600 -- # true 00:05:05.893 17:40:40 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:05.893 17:40:40 -- common/autotest_common.sh@1604 -- # killprocess 812100 00:05:05.893 17:40:40 -- common/autotest_common.sh@946 -- # '[' -z 812100 ']' 00:05:05.893 17:40:40 -- common/autotest_common.sh@950 -- # kill -0 812100 00:05:05.893 17:40:40 -- common/autotest_common.sh@951 -- # uname 00:05:05.893 17:40:40 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:05.893 17:40:40 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 812100 00:05:05.893 17:40:40 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:05.893 17:40:40 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:05.893 17:40:40 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 812100' 00:05:05.893 killing process with pid 812100 00:05:05.893 17:40:40 -- common/autotest_common.sh@965 -- # kill 812100 00:05:05.893 17:40:40 -- common/autotest_common.sh@970 -- # wait 812100 00:05:05.893 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:05.893 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:05.893 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:05.893 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:05.893 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:05.893 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:05.893 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:05.893 EAL: Unexpected size 0 of DMA remapping cleared instead of 
00:05:08.049 17:40:42 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:08.050 17:40:42 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:08.050 17:40:42 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:08.050 17:40:42 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:08.050 17:40:42 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:08.050 17:40:42 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:08.050 17:40:42 -- common/autotest_common.sh@10 -- # set +x 00:05:08.050 17:40:42 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:08.050 17:40:42 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:08.050 17:40:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:08.050 17:40:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.050 17:40:42 -- common/autotest_common.sh@10 -- # set +x 00:05:08.050 ************************************ 00:05:08.050 START TEST env 00:05:08.050 ************************************ 00:05:08.050 17:40:42 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh * Looking for test storage... 
00:05:08.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:08.050 17:40:42 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:08.050 17:40:42 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:08.050 17:40:42 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.050 17:40:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.050 ************************************ 00:05:08.050 START TEST env_memory 00:05:08.050 ************************************ 00:05:08.050 17:40:42 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:08.050 00:05:08.050 00:05:08.050 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.050 http://cunit.sourceforge.net/ 00:05:08.050 00:05:08.050 00:05:08.050 Suite: memory 00:05:08.050 Test: alloc and free memory map ...[2024-07-20 17:40:42.493645] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:08.050 passed 00:05:08.050 Test: mem map translation ...[2024-07-20 17:40:42.514029] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:08.050 [2024-07-20 17:40:42.514050] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:08.050 [2024-07-20 17:40:42.514101] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:08.050 [2024-07-20 17:40:42.514114] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:08.050 passed 00:05:08.050 Test: mem map registration ...[2024-07-20 17:40:42.554687] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:08.050 [2024-07-20 17:40:42.554707] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:08.050 passed 00:05:08.050 Test: mem map adjacent registrations ...passed 00:05:08.050 00:05:08.050 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.050 suites 1 1 n/a 0 0 00:05:08.050 tests 4 4 4 0 0 00:05:08.050 asserts 152 152 152 0 n/a 00:05:08.050 00:05:08.050 Elapsed time = 0.139 seconds 00:05:08.050 00:05:08.050 real 0m0.145s 00:05:08.050 user 0m0.135s 00:05:08.050 sys 0m0.010s 00:05:08.050 17:40:42 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.050 17:40:42 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:08.050 ************************************ 00:05:08.050 END TEST env_memory 00:05:08.050 ************************************ 00:05:08.050 17:40:42 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:08.050 17:40:42 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:08.050 17:40:42 env -- common/autotest_common.sh@1103 -- # xtrace_disable 
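The memory_ut run above exercises the public spdk_mem_map API from include/spdk/env.h (map allocation, 2 MB-aligned translations, registration). A rough stand-alone sketch of the same calls follows; the values are illustrative and it needs the same hugepage setup these tests rely on:

/* Sketch of the spdk_mem_map calls exercised by memory_ut above. A NULL ops
 * pointer means the map only stores translations and has no notify callback. */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;
    struct spdk_mem_map *map;
    uint64_t len = 0x200000;

    spdk_env_opts_init(&opts);
    opts.name = "mem_map_sketch";            /* illustrative name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    map = spdk_mem_map_alloc(0, NULL, NULL); /* default translation: 0 */
    if (map == NULL) {
        return 1;
    }

    /* vaddr and size must be 2 MB aligned; the len=1234 / vaddr=0x4d2 cases
     * in the log output are the deliberately invalid variants of these calls. */
    spdk_mem_map_set_translation(map, 0x200000, 0x200000, 0x1234);
    printf("translation: 0x%" PRIx64 "\n",
           spdk_mem_map_translate(map, 0x200000, &len));

    spdk_mem_map_clear_translation(map, 0x200000, 0x200000);
    spdk_mem_map_free(&map);
    return 0;
}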
00:05:08.050 17:40:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.050 ************************************ 00:05:08.050 START TEST env_vtophys 00:05:08.050 ************************************ 00:05:08.050 17:40:42 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:08.050 EAL: lib.eal log level changed from notice to debug 00:05:08.050 EAL: Detected lcore 0 as core 0 on socket 0 00:05:08.050 EAL: Detected lcore 1 as core 1 on socket 0 00:05:08.050 EAL: Detected lcore 2 as core 2 on socket 0 00:05:08.050 EAL: Detected lcore 3 as core 3 on socket 0 00:05:08.050 EAL: Detected lcore 4 as core 4 on socket 0 00:05:08.050 EAL: Detected lcore 5 as core 5 on socket 0 00:05:08.050 EAL: Detected lcore 6 as core 8 on socket 0 00:05:08.050 EAL: Detected lcore 7 as core 9 on socket 0 00:05:08.050 EAL: Detected lcore 8 as core 10 on socket 0 00:05:08.050 EAL: Detected lcore 9 as core 11 on socket 0 00:05:08.050 EAL: Detected lcore 10 as core 12 on socket 0 00:05:08.050 EAL: Detected lcore 11 as core 13 on socket 0 00:05:08.050 EAL: Detected lcore 12 as core 0 on socket 1 00:05:08.050 EAL: Detected lcore 13 as core 1 on socket 1 00:05:08.050 EAL: Detected lcore 14 as core 2 on socket 1 00:05:08.050 EAL: Detected lcore 15 as core 3 on socket 1 00:05:08.050 EAL: Detected lcore 16 as core 4 on socket 1 00:05:08.050 EAL: Detected lcore 17 as core 5 on socket 1 00:05:08.050 EAL: Detected lcore 18 as core 8 on socket 1 00:05:08.050 EAL: Detected lcore 19 as core 9 on socket 1 00:05:08.050 EAL: Detected lcore 20 as core 10 on socket 1 00:05:08.050 EAL: Detected lcore 21 as core 11 on socket 1 00:05:08.050 EAL: Detected lcore 22 as core 12 on socket 1 00:05:08.050 EAL: Detected lcore 23 as core 13 on socket 1 00:05:08.050 EAL: Detected lcore 24 as core 0 on socket 0 00:05:08.050 EAL: Detected lcore 25 as core 1 on socket 0 00:05:08.050 EAL: Detected lcore 26 as core 2 on socket 0 00:05:08.050 EAL: Detected lcore 27 as core 3 on socket 0 00:05:08.050 EAL: Detected lcore 28 as core 4 on socket 0 00:05:08.050 EAL: Detected lcore 29 as core 5 on socket 0 00:05:08.050 EAL: Detected lcore 30 as core 8 on socket 0 00:05:08.050 EAL: Detected lcore 31 as core 9 on socket 0 00:05:08.050 EAL: Detected lcore 32 as core 10 on socket 0 00:05:08.050 EAL: Detected lcore 33 as core 11 on socket 0 00:05:08.050 EAL: Detected lcore 34 as core 12 on socket 0 00:05:08.050 EAL: Detected lcore 35 as core 13 on socket 0 00:05:08.050 EAL: Detected lcore 36 as core 0 on socket 1 00:05:08.050 EAL: Detected lcore 37 as core 1 on socket 1 00:05:08.050 EAL: Detected lcore 38 as core 2 on socket 1 00:05:08.050 EAL: Detected lcore 39 as core 3 on socket 1 00:05:08.050 EAL: Detected lcore 40 as core 4 on socket 1 00:05:08.050 EAL: Detected lcore 41 as core 5 on socket 1 00:05:08.050 EAL: Detected lcore 42 as core 8 on socket 1 00:05:08.050 EAL: Detected lcore 43 as core 9 on socket 1 00:05:08.050 EAL: Detected lcore 44 as core 10 on socket 1 00:05:08.050 EAL: Detected lcore 45 as core 11 on socket 1 00:05:08.050 EAL: Detected lcore 46 as core 12 on socket 1 00:05:08.050 EAL: Detected lcore 47 as core 13 on socket 1 00:05:08.050 EAL: Maximum logical cores by configuration: 128 00:05:08.050 EAL: Detected CPU lcores: 48 00:05:08.050 EAL: Detected NUMA nodes: 2 00:05:08.050 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:08.050 EAL: Detected shared linkage of DPDK 00:05:08.050 EAL: open shared lib 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:08.050 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:08.050 EAL: Registered [vdev] bus. 00:05:08.050 EAL: bus.vdev log level changed from disabled to notice 00:05:08.050 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:08.050 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:08.050 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:08.050 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:08.050 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:08.050 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:08.050 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:08.050 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:08.050 EAL: No shared files mode enabled, IPC will be disabled 00:05:08.050 EAL: No shared files mode enabled, IPC is disabled 00:05:08.050 EAL: Bus pci wants IOVA as 'DC' 00:05:08.050 EAL: Bus vdev wants IOVA as 'DC' 00:05:08.050 EAL: Buses did not request a specific IOVA mode. 00:05:08.050 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:08.050 EAL: Selected IOVA mode 'VA' 00:05:08.050 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.050 EAL: Probing VFIO support... 00:05:08.050 EAL: IOMMU type 1 (Type 1) is supported 00:05:08.050 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:08.050 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:08.050 EAL: VFIO support initialized 00:05:08.050 EAL: Ask a virtual area of 0x2e000 bytes 00:05:08.050 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:08.050 EAL: Setting up physically contiguous memory... 
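When the environment is set up programmatically rather than via the command line, the EAL parameters seen at the top of this run map onto fields of struct spdk_env_opts. A hedged sketch, mirroring the -c 0x1 and --base-virtaddr=0x200000000000 values from this log (field names are from include/spdk/env.h):

/* Sketch: programmatic equivalent of the EAL parameters used by this run.
 * IOVA mode and VFIO support are still probed by EAL at init time, exactly
 * as the "Selected IOVA mode 'VA'" / "VFIO support initialized" lines show. */
#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "spdk_tgt";                  /* program name handed to EAL */
    opts.core_mask = "0x1";                  /* -c 0x1 */
    opts.base_virtaddr = 0x200000000000;     /* --base-virtaddr */

    return spdk_env_init(&opts) == 0 ? 0 : 1;
}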
00:05:08.050 EAL: Setting maximum number of open files to 524288 00:05:08.050 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:08.050 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:08.050 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:08.050 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.050 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:08.050 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.050 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.050 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:08.050 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:08.050 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.050 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:08.050 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.050 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.050 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:08.050 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:08.050 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.050 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:08.050 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.050 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.050 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:08.050 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:08.050 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.050 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:08.051 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.051 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.051 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:08.051 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:08.051 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:08.051 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.051 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:08.051 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.051 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.051 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:08.051 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:08.051 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.051 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:08.051 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.051 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.051 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:08.051 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:08.051 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.051 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:08.051 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.051 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.051 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:08.051 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:08.051 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.051 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:08.051 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.051 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.051 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:08.051 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:08.051 EAL: Hugepages will be freed exactly as allocated. 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: TSC frequency is ~2700000 KHz 00:05:08.051 EAL: Main lcore 0 is ready (tid=7ff1ff61ea00;cpuset=[0]) 00:05:08.051 EAL: Trying to obtain current memory policy. 00:05:08.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.051 EAL: Restoring previous memory policy: 0 00:05:08.051 EAL: request: mp_malloc_sync 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: Heap on socket 0 was expanded by 2MB 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:08.051 EAL: Mem event callback 'spdk:(nil)' registered 00:05:08.051 00:05:08.051 00:05:08.051 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.051 http://cunit.sourceforge.net/ 00:05:08.051 00:05:08.051 00:05:08.051 Suite: components_suite 00:05:08.051 Test: vtophys_malloc_test ...passed 00:05:08.051 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:08.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.051 EAL: Restoring previous memory policy: 4 00:05:08.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.051 EAL: request: mp_malloc_sync 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: Heap on socket 0 was expanded by 4MB 00:05:08.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.051 EAL: request: mp_malloc_sync 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: Heap on socket 0 was shrunk by 4MB 00:05:08.051 EAL: Trying to obtain current memory policy. 00:05:08.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.051 EAL: Restoring previous memory policy: 4 00:05:08.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.051 EAL: request: mp_malloc_sync 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: Heap on socket 0 was expanded by 6MB 00:05:08.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.051 EAL: request: mp_malloc_sync 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: Heap on socket 0 was shrunk by 6MB 00:05:08.051 EAL: Trying to obtain current memory policy. 00:05:08.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.051 EAL: Restoring previous memory policy: 4 00:05:08.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.051 EAL: request: mp_malloc_sync 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: Heap on socket 0 was expanded by 10MB 00:05:08.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.051 EAL: request: mp_malloc_sync 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: Heap on socket 0 was shrunk by 10MB 00:05:08.051 EAL: Trying to obtain current memory policy. 
00:05:08.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.051 EAL: Restoring previous memory policy: 4 00:05:08.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.051 EAL: request: mp_malloc_sync 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: Heap on socket 0 was expanded by 18MB 00:05:08.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.051 EAL: request: mp_malloc_sync 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: Heap on socket 0 was shrunk by 18MB 00:05:08.051 EAL: Trying to obtain current memory policy. 00:05:08.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.051 EAL: Restoring previous memory policy: 4 00:05:08.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.051 EAL: request: mp_malloc_sync 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: Heap on socket 0 was expanded by 34MB 00:05:08.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.051 EAL: request: mp_malloc_sync 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: Heap on socket 0 was shrunk by 34MB 00:05:08.051 EAL: Trying to obtain current memory policy. 00:05:08.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.051 EAL: Restoring previous memory policy: 4 00:05:08.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.051 EAL: request: mp_malloc_sync 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: Heap on socket 0 was expanded by 66MB 00:05:08.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.051 EAL: request: mp_malloc_sync 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: Heap on socket 0 was shrunk by 66MB 00:05:08.051 EAL: Trying to obtain current memory policy. 00:05:08.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.051 EAL: Restoring previous memory policy: 4 00:05:08.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.051 EAL: request: mp_malloc_sync 00:05:08.051 EAL: No shared files mode enabled, IPC is disabled 00:05:08.051 EAL: Heap on socket 0 was expanded by 130MB 00:05:08.309 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.309 EAL: request: mp_malloc_sync 00:05:08.309 EAL: No shared files mode enabled, IPC is disabled 00:05:08.309 EAL: Heap on socket 0 was shrunk by 130MB 00:05:08.309 EAL: Trying to obtain current memory policy. 00:05:08.309 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.309 EAL: Restoring previous memory policy: 4 00:05:08.309 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.309 EAL: request: mp_malloc_sync 00:05:08.309 EAL: No shared files mode enabled, IPC is disabled 00:05:08.309 EAL: Heap on socket 0 was expanded by 258MB 00:05:08.309 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.309 EAL: request: mp_malloc_sync 00:05:08.309 EAL: No shared files mode enabled, IPC is disabled 00:05:08.309 EAL: Heap on socket 0 was shrunk by 258MB 00:05:08.309 EAL: Trying to obtain current memory policy. 
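The repeated "expanded by"/"shrunk by" pairs above are the vtophys test allocating progressively larger DMA-safe buffers and freeing them again, with each heap change reported through the registered mem event callback. A small stand-alone sketch of that pattern (sizes are illustrative; it assumes a working hugepage setup like the one in this run):

/* Sketch: allocate doubling DMA buffers and resolve their IOVAs, the same
 * basic pattern that drives the heap expand/shrink messages in the log. */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "vtophys_sketch";            /* illustrative name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    for (size_t sz = 4UL << 20; sz <= (64UL << 20); sz *= 2) {
        uint64_t phys_len = sz;
        void *buf = spdk_dma_zmalloc(sz, 0x200000, NULL); /* 2 MB aligned */

        if (buf == NULL) {
            break;
        }
        /* spdk_vtophys() is the translation vtophys_malloc_test relies on. */
        printf("%zu MiB -> iova 0x%" PRIx64 "\n",
               sz >> 20, spdk_vtophys(buf, &phys_len));
        /* Freeing lets the heap shrink again, as the "shrunk by" lines show. */
        spdk_dma_free(buf);
    }
    return 0;
}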
00:05:08.309 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.567 EAL: Restoring previous memory policy: 4 00:05:08.567 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.567 EAL: request: mp_malloc_sync 00:05:08.567 EAL: No shared files mode enabled, IPC is disabled 00:05:08.567 EAL: Heap on socket 0 was expanded by 514MB 00:05:08.567 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.824 EAL: request: mp_malloc_sync 00:05:08.824 EAL: No shared files mode enabled, IPC is disabled 00:05:08.824 EAL: Heap on socket 0 was shrunk by 514MB 00:05:08.824 EAL: Trying to obtain current memory policy. 00:05:08.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.090 EAL: Restoring previous memory policy: 4 00:05:09.090 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.090 EAL: request: mp_malloc_sync 00:05:09.090 EAL: No shared files mode enabled, IPC is disabled 00:05:09.090 EAL: Heap on socket 0 was expanded by 1026MB 00:05:09.372 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.372 EAL: request: mp_malloc_sync 00:05:09.372 EAL: No shared files mode enabled, IPC is disabled 00:05:09.372 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:09.372 passed 00:05:09.372 00:05:09.372 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.372 suites 1 1 n/a 0 0 00:05:09.372 tests 2 2 2 0 0 00:05:09.372 asserts 497 497 497 0 n/a 00:05:09.372 00:05:09.372 Elapsed time = 1.366 seconds 00:05:09.372 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.372 EAL: request: mp_malloc_sync 00:05:09.372 EAL: No shared files mode enabled, IPC is disabled 00:05:09.372 EAL: Heap on socket 0 was shrunk by 2MB 00:05:09.372 EAL: No shared files mode enabled, IPC is disabled 00:05:09.372 EAL: No shared files mode enabled, IPC is disabled 00:05:09.372 EAL: No shared files mode enabled, IPC is disabled 00:05:09.372 00:05:09.372 real 0m1.480s 00:05:09.372 user 0m0.847s 00:05:09.372 sys 0m0.598s 00:05:09.372 17:40:44 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.372 17:40:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:09.372 ************************************ 00:05:09.372 END TEST env_vtophys 00:05:09.372 ************************************ 00:05:09.631 17:40:44 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:09.631 17:40:44 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.631 17:40:44 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.631 17:40:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.631 ************************************ 00:05:09.631 START TEST env_pci 00:05:09.631 ************************************ 00:05:09.631 17:40:44 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:09.631 00:05:09.631 00:05:09.631 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.631 http://cunit.sourceforge.net/ 00:05:09.631 00:05:09.631 00:05:09.631 Suite: pci 00:05:09.631 Test: pci_hook ...[2024-07-20 17:40:44.184188] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 812996 has claimed it 00:05:09.631 EAL: Cannot find device (10000:00:01.0) 00:05:09.631 EAL: Failed to attach device on primary process 00:05:09.631 passed 00:05:09.631 00:05:09.631 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:09.631 suites 1 1 n/a 0 0 00:05:09.631 tests 1 1 1 0 0 00:05:09.631 asserts 25 25 25 0 n/a 00:05:09.631 00:05:09.631 Elapsed time = 0.022 seconds 00:05:09.631 00:05:09.631 real 0m0.034s 00:05:09.631 user 0m0.010s 00:05:09.631 sys 0m0.023s 00:05:09.631 17:40:44 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.631 17:40:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:09.631 ************************************ 00:05:09.631 END TEST env_pci 00:05:09.631 ************************************ 00:05:09.631 17:40:44 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:09.631 17:40:44 env -- env/env.sh@15 -- # uname 00:05:09.631 17:40:44 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:09.631 17:40:44 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:09.631 17:40:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:09.631 17:40:44 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:09.631 17:40:44 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.631 17:40:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.631 ************************************ 00:05:09.631 START TEST env_dpdk_post_init 00:05:09.631 ************************************ 00:05:09.631 17:40:44 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:09.631 EAL: Detected CPU lcores: 48 00:05:09.631 EAL: Detected NUMA nodes: 2 00:05:09.631 EAL: Detected shared linkage of DPDK 00:05:09.631 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:09.631 EAL: Selected IOVA mode 'VA' 00:05:09.631 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.631 EAL: VFIO support initialized 00:05:09.631 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:09.631 EAL: Using IOMMU type 1 (Type 1) 00:05:09.631 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:09.631 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:09.631 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:09.631 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:09.631 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:09.631 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:09.888 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:09.888 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:09.888 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:09.888 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:09.888 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:09.888 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:09.888 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:09.888 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:09.888 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:09.888 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:10.822 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:14.102 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:14.102 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:14.102 Starting DPDK initialization... 00:05:14.102 Starting SPDK post initialization... 00:05:14.102 SPDK NVMe probe 00:05:14.102 Attaching to 0000:88:00.0 00:05:14.102 Attached to 0000:88:00.0 00:05:14.102 Cleaning up... 00:05:14.102 00:05:14.102 real 0m4.399s 00:05:14.102 user 0m3.259s 00:05:14.102 sys 0m0.200s 00:05:14.102 17:40:48 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.102 17:40:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.102 ************************************ 00:05:14.102 END TEST env_dpdk_post_init 00:05:14.102 ************************************ 00:05:14.102 17:40:48 env -- env/env.sh@26 -- # uname 00:05:14.102 17:40:48 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:14.102 17:40:48 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:14.102 17:40:48 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.102 17:40:48 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.102 17:40:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.102 ************************************ 00:05:14.102 START TEST env_mem_callbacks 00:05:14.102 ************************************ 00:05:14.102 17:40:48 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:14.102 EAL: Detected CPU lcores: 48 00:05:14.102 EAL: Detected NUMA nodes: 2 00:05:14.102 EAL: Detected shared linkage of DPDK 00:05:14.102 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:14.102 EAL: Selected IOVA mode 'VA' 00:05:14.102 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.102 EAL: VFIO support initialized 00:05:14.102 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:14.102 00:05:14.102 00:05:14.102 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.102 http://cunit.sourceforge.net/ 00:05:14.102 00:05:14.102 00:05:14.102 Suite: memory 00:05:14.102 Test: test ... 
00:05:14.102 register 0x200000200000 2097152 00:05:14.102 malloc 3145728 00:05:14.102 register 0x200000400000 4194304 00:05:14.102 buf 0x200000500000 len 3145728 PASSED 00:05:14.102 malloc 64 00:05:14.102 buf 0x2000004fff40 len 64 PASSED 00:05:14.102 malloc 4194304 00:05:14.102 register 0x200000800000 6291456 00:05:14.102 buf 0x200000a00000 len 4194304 PASSED 00:05:14.102 free 0x200000500000 3145728 00:05:14.102 free 0x2000004fff40 64 00:05:14.102 unregister 0x200000400000 4194304 PASSED 00:05:14.102 free 0x200000a00000 4194304 00:05:14.102 unregister 0x200000800000 6291456 PASSED 00:05:14.102 malloc 8388608 00:05:14.102 register 0x200000400000 10485760 00:05:14.102 buf 0x200000600000 len 8388608 PASSED 00:05:14.102 free 0x200000600000 8388608 00:05:14.102 unregister 0x200000400000 10485760 PASSED 00:05:14.102 passed 00:05:14.102 00:05:14.102 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.102 suites 1 1 n/a 0 0 00:05:14.102 tests 1 1 1 0 0 00:05:14.102 asserts 15 15 15 0 n/a 00:05:14.102 00:05:14.102 Elapsed time = 0.005 seconds 00:05:14.102 00:05:14.102 real 0m0.047s 00:05:14.102 user 0m0.010s 00:05:14.102 sys 0m0.037s 00:05:14.102 17:40:48 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.102 17:40:48 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:14.102 ************************************ 00:05:14.102 END TEST env_mem_callbacks 00:05:14.102 ************************************ 00:05:14.102 00:05:14.102 real 0m6.373s 00:05:14.102 user 0m4.366s 00:05:14.102 sys 0m1.047s 00:05:14.102 17:40:48 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.102 17:40:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.102 ************************************ 00:05:14.102 END TEST env 00:05:14.102 ************************************ 00:05:14.102 17:40:48 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:14.102 17:40:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.102 17:40:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.102 17:40:48 -- common/autotest_common.sh@10 -- # set +x 00:05:14.102 ************************************ 00:05:14.102 START TEST rpc 00:05:14.102 ************************************ 00:05:14.102 17:40:48 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:14.102 * Looking for test storage... 00:05:14.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:14.102 17:40:48 rpc -- rpc/rpc.sh@65 -- # spdk_pid=813647 00:05:14.102 17:40:48 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:14.102 17:40:48 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.102 17:40:48 rpc -- rpc/rpc.sh@67 -- # waitforlisten 813647 00:05:14.102 17:40:48 rpc -- common/autotest_common.sh@827 -- # '[' -z 813647 ']' 00:05:14.102 17:40:48 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.102 17:40:48 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:14.102 17:40:48 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
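The env_mem_callbacks output above registers and unregisters buffers and prints the resulting register/unregister events; those events come from the spdk_mem_register / spdk_mem_unregister pair in include/spdk/env.h. A minimal sketch, with an illustrative aligned heap buffer standing in for the test's regions:

/* Sketch of the register/unregister pattern shown by mem_callbacks above.
 * spdk_mem_register() requires 2 MB alignment for both address and length,
 * which is why posix_memalign() is used here. */
#include <stdio.h>
#include <stdlib.h>
#include "spdk/env.h"

#define REGION_SIZE  (4UL * 1024 * 1024)   /* 4 MB, as in "register ... 4194304" */
#define REGION_ALIGN (2UL * 1024 * 1024)   /* 2 MB alignment requirement */

int main(void)
{
    struct spdk_env_opts opts;
    void *buf = NULL;

    spdk_env_opts_init(&opts);
    opts.name = "mem_cb_sketch";             /* illustrative name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }
    if (posix_memalign(&buf, REGION_ALIGN, REGION_SIZE) != 0) {
        return 1;
    }
    /* Registration walks every spdk_mem_map and fires its notify callback,
     * which is what produces the "register 0x... 4194304" lines above. */
    if (spdk_mem_register(buf, REGION_SIZE) == 0) {
        printf("registered %p (+%lu bytes)\n", buf, REGION_SIZE);
        spdk_mem_unregister(buf, REGION_SIZE);
    }
    free(buf);
    return 0;
}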
00:05:14.102 17:40:48 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:14.102 17:40:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.362 [2024-07-20 17:40:48.904349] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:14.362 [2024-07-20 17:40:48.904430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid813647 ] 00:05:14.362 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.362 [2024-07-20 17:40:48.960951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.362 [2024-07-20 17:40:49.056124] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:14.362 [2024-07-20 17:40:49.056170] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 813647' to capture a snapshot of events at runtime. 00:05:14.362 [2024-07-20 17:40:49.056191] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:14.362 [2024-07-20 17:40:49.056202] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:14.362 [2024-07-20 17:40:49.056211] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid813647 for offline analysis/debug. 00:05:14.362 [2024-07-20 17:40:49.056248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.621 17:40:49 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:14.621 17:40:49 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:14.621 17:40:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:14.621 17:40:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:14.621 17:40:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:14.621 17:40:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:14.621 17:40:49 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.621 17:40:49 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.621 17:40:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.621 ************************************ 00:05:14.621 START TEST rpc_integrity 00:05:14.621 ************************************ 00:05:14.621 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:14.621 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:14.621 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.621 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.621 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.621 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:14.621 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:14.621 17:40:49 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:14.621 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:14.621 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.621 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.621 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.621 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:14.621 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:14.621 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.621 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.621 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.621 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:14.621 { 00:05:14.622 "name": "Malloc0", 00:05:14.622 "aliases": [ 00:05:14.622 "beafc680-91ae-48c2-9813-e7171666e3d8" 00:05:14.622 ], 00:05:14.622 "product_name": "Malloc disk", 00:05:14.622 "block_size": 512, 00:05:14.622 "num_blocks": 16384, 00:05:14.622 "uuid": "beafc680-91ae-48c2-9813-e7171666e3d8", 00:05:14.622 "assigned_rate_limits": { 00:05:14.622 "rw_ios_per_sec": 0, 00:05:14.622 "rw_mbytes_per_sec": 0, 00:05:14.622 "r_mbytes_per_sec": 0, 00:05:14.622 "w_mbytes_per_sec": 0 00:05:14.622 }, 00:05:14.622 "claimed": false, 00:05:14.622 "zoned": false, 00:05:14.622 "supported_io_types": { 00:05:14.622 "read": true, 00:05:14.622 "write": true, 00:05:14.622 "unmap": true, 00:05:14.622 "write_zeroes": true, 00:05:14.622 "flush": true, 00:05:14.622 "reset": true, 00:05:14.622 "compare": false, 00:05:14.622 "compare_and_write": false, 00:05:14.622 "abort": true, 00:05:14.622 "nvme_admin": false, 00:05:14.622 "nvme_io": false 00:05:14.622 }, 00:05:14.622 "memory_domains": [ 00:05:14.622 { 00:05:14.622 "dma_device_id": "system", 00:05:14.622 "dma_device_type": 1 00:05:14.622 }, 00:05:14.622 { 00:05:14.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.622 "dma_device_type": 2 00:05:14.622 } 00:05:14.622 ], 00:05:14.622 "driver_specific": {} 00:05:14.622 } 00:05:14.622 ]' 00:05:14.622 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:14.880 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:14.880 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:14.880 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.880 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.880 [2024-07-20 17:40:49.447754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:14.880 [2024-07-20 17:40:49.447806] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:14.880 [2024-07-20 17:40:49.447833] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x247e8f0 00:05:14.880 [2024-07-20 17:40:49.447850] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:14.880 [2024-07-20 17:40:49.449264] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:14.880 [2024-07-20 17:40:49.449294] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:14.880 Passthru0 00:05:14.880 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.880 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:14.880 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.880 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.880 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.880 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:14.880 { 00:05:14.880 "name": "Malloc0", 00:05:14.880 "aliases": [ 00:05:14.880 "beafc680-91ae-48c2-9813-e7171666e3d8" 00:05:14.880 ], 00:05:14.880 "product_name": "Malloc disk", 00:05:14.880 "block_size": 512, 00:05:14.880 "num_blocks": 16384, 00:05:14.880 "uuid": "beafc680-91ae-48c2-9813-e7171666e3d8", 00:05:14.880 "assigned_rate_limits": { 00:05:14.880 "rw_ios_per_sec": 0, 00:05:14.880 "rw_mbytes_per_sec": 0, 00:05:14.880 "r_mbytes_per_sec": 0, 00:05:14.880 "w_mbytes_per_sec": 0 00:05:14.880 }, 00:05:14.880 "claimed": true, 00:05:14.880 "claim_type": "exclusive_write", 00:05:14.880 "zoned": false, 00:05:14.880 "supported_io_types": { 00:05:14.880 "read": true, 00:05:14.880 "write": true, 00:05:14.880 "unmap": true, 00:05:14.880 "write_zeroes": true, 00:05:14.880 "flush": true, 00:05:14.880 "reset": true, 00:05:14.880 "compare": false, 00:05:14.880 "compare_and_write": false, 00:05:14.880 "abort": true, 00:05:14.880 "nvme_admin": false, 00:05:14.880 "nvme_io": false 00:05:14.880 }, 00:05:14.880 "memory_domains": [ 00:05:14.880 { 00:05:14.880 "dma_device_id": "system", 00:05:14.880 "dma_device_type": 1 00:05:14.880 }, 00:05:14.880 { 00:05:14.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.880 "dma_device_type": 2 00:05:14.880 } 00:05:14.880 ], 00:05:14.880 "driver_specific": {} 00:05:14.880 }, 00:05:14.880 { 00:05:14.880 "name": "Passthru0", 00:05:14.880 "aliases": [ 00:05:14.880 "60b3523d-a955-5f65-a2f2-ca5d0248420c" 00:05:14.880 ], 00:05:14.880 "product_name": "passthru", 00:05:14.880 "block_size": 512, 00:05:14.880 "num_blocks": 16384, 00:05:14.880 "uuid": "60b3523d-a955-5f65-a2f2-ca5d0248420c", 00:05:14.880 "assigned_rate_limits": { 00:05:14.880 "rw_ios_per_sec": 0, 00:05:14.880 "rw_mbytes_per_sec": 0, 00:05:14.880 "r_mbytes_per_sec": 0, 00:05:14.880 "w_mbytes_per_sec": 0 00:05:14.880 }, 00:05:14.880 "claimed": false, 00:05:14.880 "zoned": false, 00:05:14.880 "supported_io_types": { 00:05:14.880 "read": true, 00:05:14.880 "write": true, 00:05:14.880 "unmap": true, 00:05:14.880 "write_zeroes": true, 00:05:14.880 "flush": true, 00:05:14.880 "reset": true, 00:05:14.880 "compare": false, 00:05:14.880 "compare_and_write": false, 00:05:14.880 "abort": true, 00:05:14.880 "nvme_admin": false, 00:05:14.881 "nvme_io": false 00:05:14.881 }, 00:05:14.881 "memory_domains": [ 00:05:14.881 { 00:05:14.881 "dma_device_id": "system", 00:05:14.881 "dma_device_type": 1 00:05:14.881 }, 00:05:14.881 { 00:05:14.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.881 "dma_device_type": 2 00:05:14.881 } 00:05:14.881 ], 00:05:14.881 "driver_specific": { 00:05:14.881 "passthru": { 00:05:14.881 "name": "Passthru0", 00:05:14.881 "base_bdev_name": "Malloc0" 00:05:14.881 } 00:05:14.881 } 00:05:14.881 } 00:05:14.881 ]' 00:05:14.881 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:14.881 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.881 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.881 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.881 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.881 
17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.881 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:14.881 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.881 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.881 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.881 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.881 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.881 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.881 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.881 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:14.881 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:14.881 17:40:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:14.881 00:05:14.881 real 0m0.227s 00:05:14.881 user 0m0.149s 00:05:14.881 sys 0m0.021s 00:05:14.881 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.881 17:40:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.881 ************************************ 00:05:14.881 END TEST rpc_integrity 00:05:14.881 ************************************ 00:05:14.881 17:40:49 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:14.881 17:40:49 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.881 17:40:49 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.881 17:40:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.881 ************************************ 00:05:14.881 START TEST rpc_plugins 00:05:14.881 ************************************ 00:05:14.881 17:40:49 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:14.881 17:40:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:14.881 17:40:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.881 17:40:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.881 17:40:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.881 17:40:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:14.881 17:40:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:14.881 17:40:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.881 17:40:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.881 17:40:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.881 17:40:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:14.881 { 00:05:14.881 "name": "Malloc1", 00:05:14.881 "aliases": [ 00:05:14.881 "fe83d318-3007-48e4-a438-01d587c8422d" 00:05:14.881 ], 00:05:14.881 "product_name": "Malloc disk", 00:05:14.881 "block_size": 4096, 00:05:14.881 "num_blocks": 256, 00:05:14.881 "uuid": "fe83d318-3007-48e4-a438-01d587c8422d", 00:05:14.881 "assigned_rate_limits": { 00:05:14.881 "rw_ios_per_sec": 0, 00:05:14.881 "rw_mbytes_per_sec": 0, 00:05:14.881 "r_mbytes_per_sec": 0, 00:05:14.881 "w_mbytes_per_sec": 0 00:05:14.881 }, 00:05:14.881 "claimed": false, 00:05:14.881 "zoned": false, 00:05:14.881 "supported_io_types": { 00:05:14.881 "read": true, 00:05:14.881 "write": true, 00:05:14.881 "unmap": true, 00:05:14.881 "write_zeroes": true, 00:05:14.881 
"flush": true, 00:05:14.881 "reset": true, 00:05:14.881 "compare": false, 00:05:14.881 "compare_and_write": false, 00:05:14.881 "abort": true, 00:05:14.881 "nvme_admin": false, 00:05:14.881 "nvme_io": false 00:05:14.881 }, 00:05:14.881 "memory_domains": [ 00:05:14.881 { 00:05:14.881 "dma_device_id": "system", 00:05:14.881 "dma_device_type": 1 00:05:14.881 }, 00:05:14.881 { 00:05:14.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.881 "dma_device_type": 2 00:05:14.881 } 00:05:14.881 ], 00:05:14.881 "driver_specific": {} 00:05:14.881 } 00:05:14.881 ]' 00:05:14.881 17:40:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:14.881 17:40:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:14.881 17:40:49 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:14.881 17:40:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.881 17:40:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.139 17:40:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.139 17:40:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:15.139 17:40:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.139 17:40:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.139 17:40:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.139 17:40:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:15.139 17:40:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:15.139 17:40:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:15.139 00:05:15.139 real 0m0.113s 00:05:15.139 user 0m0.077s 00:05:15.139 sys 0m0.008s 00:05:15.139 17:40:49 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.139 17:40:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.139 ************************************ 00:05:15.139 END TEST rpc_plugins 00:05:15.139 ************************************ 00:05:15.139 17:40:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:15.139 17:40:49 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.139 17:40:49 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.139 17:40:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.139 ************************************ 00:05:15.139 START TEST rpc_trace_cmd_test 00:05:15.139 ************************************ 00:05:15.139 17:40:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:15.139 17:40:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:15.139 17:40:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:15.139 17:40:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.139 17:40:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.139 17:40:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.139 17:40:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:15.139 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid813647", 00:05:15.139 "tpoint_group_mask": "0x8", 00:05:15.139 "iscsi_conn": { 00:05:15.139 "mask": "0x2", 00:05:15.139 "tpoint_mask": "0x0" 00:05:15.139 }, 00:05:15.139 "scsi": { 00:05:15.139 "mask": "0x4", 00:05:15.139 "tpoint_mask": "0x0" 00:05:15.139 }, 00:05:15.139 "bdev": { 00:05:15.139 "mask": "0x8", 00:05:15.139 "tpoint_mask": 
"0xffffffffffffffff" 00:05:15.139 }, 00:05:15.139 "nvmf_rdma": { 00:05:15.139 "mask": "0x10", 00:05:15.140 "tpoint_mask": "0x0" 00:05:15.140 }, 00:05:15.140 "nvmf_tcp": { 00:05:15.140 "mask": "0x20", 00:05:15.140 "tpoint_mask": "0x0" 00:05:15.140 }, 00:05:15.140 "ftl": { 00:05:15.140 "mask": "0x40", 00:05:15.140 "tpoint_mask": "0x0" 00:05:15.140 }, 00:05:15.140 "blobfs": { 00:05:15.140 "mask": "0x80", 00:05:15.140 "tpoint_mask": "0x0" 00:05:15.140 }, 00:05:15.140 "dsa": { 00:05:15.140 "mask": "0x200", 00:05:15.140 "tpoint_mask": "0x0" 00:05:15.140 }, 00:05:15.140 "thread": { 00:05:15.140 "mask": "0x400", 00:05:15.140 "tpoint_mask": "0x0" 00:05:15.140 }, 00:05:15.140 "nvme_pcie": { 00:05:15.140 "mask": "0x800", 00:05:15.140 "tpoint_mask": "0x0" 00:05:15.140 }, 00:05:15.140 "iaa": { 00:05:15.140 "mask": "0x1000", 00:05:15.140 "tpoint_mask": "0x0" 00:05:15.140 }, 00:05:15.140 "nvme_tcp": { 00:05:15.140 "mask": "0x2000", 00:05:15.140 "tpoint_mask": "0x0" 00:05:15.140 }, 00:05:15.140 "bdev_nvme": { 00:05:15.140 "mask": "0x4000", 00:05:15.140 "tpoint_mask": "0x0" 00:05:15.140 }, 00:05:15.140 "sock": { 00:05:15.140 "mask": "0x8000", 00:05:15.140 "tpoint_mask": "0x0" 00:05:15.140 } 00:05:15.140 }' 00:05:15.140 17:40:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:15.140 17:40:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:15.140 17:40:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:15.140 17:40:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:15.140 17:40:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:15.140 17:40:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:15.140 17:40:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:15.140 17:40:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:15.140 17:40:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:15.399 17:40:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:15.399 00:05:15.399 real 0m0.196s 00:05:15.399 user 0m0.167s 00:05:15.399 sys 0m0.019s 00:05:15.399 17:40:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.399 17:40:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.399 ************************************ 00:05:15.399 END TEST rpc_trace_cmd_test 00:05:15.399 ************************************ 00:05:15.399 17:40:49 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:15.399 17:40:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:15.399 17:40:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:15.399 17:40:49 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.399 17:40:49 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.399 17:40:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.399 ************************************ 00:05:15.399 START TEST rpc_daemon_integrity 00:05:15.399 ************************************ 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.399 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:15.399 { 00:05:15.399 "name": "Malloc2", 00:05:15.399 "aliases": [ 00:05:15.399 "64d87268-d323-478b-a1c2-1b263ea3eaf1" 00:05:15.399 ], 00:05:15.399 "product_name": "Malloc disk", 00:05:15.399 "block_size": 512, 00:05:15.399 "num_blocks": 16384, 00:05:15.399 "uuid": "64d87268-d323-478b-a1c2-1b263ea3eaf1", 00:05:15.399 "assigned_rate_limits": { 00:05:15.399 "rw_ios_per_sec": 0, 00:05:15.399 "rw_mbytes_per_sec": 0, 00:05:15.399 "r_mbytes_per_sec": 0, 00:05:15.399 "w_mbytes_per_sec": 0 00:05:15.399 }, 00:05:15.399 "claimed": false, 00:05:15.399 "zoned": false, 00:05:15.399 "supported_io_types": { 00:05:15.399 "read": true, 00:05:15.399 "write": true, 00:05:15.399 "unmap": true, 00:05:15.399 "write_zeroes": true, 00:05:15.399 "flush": true, 00:05:15.399 "reset": true, 00:05:15.399 "compare": false, 00:05:15.399 "compare_and_write": false, 00:05:15.399 "abort": true, 00:05:15.399 "nvme_admin": false, 00:05:15.399 "nvme_io": false 00:05:15.399 }, 00:05:15.399 "memory_domains": [ 00:05:15.399 { 00:05:15.399 "dma_device_id": "system", 00:05:15.399 "dma_device_type": 1 00:05:15.399 }, 00:05:15.399 { 00:05:15.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.399 "dma_device_type": 2 00:05:15.399 } 00:05:15.399 ], 00:05:15.399 "driver_specific": {} 00:05:15.399 } 00:05:15.399 ]' 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.400 [2024-07-20 17:40:50.118442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:15.400 [2024-07-20 17:40:50.118489] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:15.400 [2024-07-20 17:40:50.118512] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2379600 00:05:15.400 [2024-07-20 17:40:50.118527] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:15.400 [2024-07-20 17:40:50.119989] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:15.400 [2024-07-20 17:40:50.120015] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:15.400 Passthru0 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:15.400 { 00:05:15.400 "name": "Malloc2", 00:05:15.400 "aliases": [ 00:05:15.400 "64d87268-d323-478b-a1c2-1b263ea3eaf1" 00:05:15.400 ], 00:05:15.400 "product_name": "Malloc disk", 00:05:15.400 "block_size": 512, 00:05:15.400 "num_blocks": 16384, 00:05:15.400 "uuid": "64d87268-d323-478b-a1c2-1b263ea3eaf1", 00:05:15.400 "assigned_rate_limits": { 00:05:15.400 "rw_ios_per_sec": 0, 00:05:15.400 "rw_mbytes_per_sec": 0, 00:05:15.400 "r_mbytes_per_sec": 0, 00:05:15.400 "w_mbytes_per_sec": 0 00:05:15.400 }, 00:05:15.400 "claimed": true, 00:05:15.400 "claim_type": "exclusive_write", 00:05:15.400 "zoned": false, 00:05:15.400 "supported_io_types": { 00:05:15.400 "read": true, 00:05:15.400 "write": true, 00:05:15.400 "unmap": true, 00:05:15.400 "write_zeroes": true, 00:05:15.400 "flush": true, 00:05:15.400 "reset": true, 00:05:15.400 "compare": false, 00:05:15.400 "compare_and_write": false, 00:05:15.400 "abort": true, 00:05:15.400 "nvme_admin": false, 00:05:15.400 "nvme_io": false 00:05:15.400 }, 00:05:15.400 "memory_domains": [ 00:05:15.400 { 00:05:15.400 "dma_device_id": "system", 00:05:15.400 "dma_device_type": 1 00:05:15.400 }, 00:05:15.400 { 00:05:15.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.400 "dma_device_type": 2 00:05:15.400 } 00:05:15.400 ], 00:05:15.400 "driver_specific": {} 00:05:15.400 }, 00:05:15.400 { 00:05:15.400 "name": "Passthru0", 00:05:15.400 "aliases": [ 00:05:15.400 "af7809a3-e643-5122-b945-9505e729f192" 00:05:15.400 ], 00:05:15.400 "product_name": "passthru", 00:05:15.400 "block_size": 512, 00:05:15.400 "num_blocks": 16384, 00:05:15.400 "uuid": "af7809a3-e643-5122-b945-9505e729f192", 00:05:15.400 "assigned_rate_limits": { 00:05:15.400 "rw_ios_per_sec": 0, 00:05:15.400 "rw_mbytes_per_sec": 0, 00:05:15.400 "r_mbytes_per_sec": 0, 00:05:15.400 "w_mbytes_per_sec": 0 00:05:15.400 }, 00:05:15.400 "claimed": false, 00:05:15.400 "zoned": false, 00:05:15.400 "supported_io_types": { 00:05:15.400 "read": true, 00:05:15.400 "write": true, 00:05:15.400 "unmap": true, 00:05:15.400 "write_zeroes": true, 00:05:15.400 "flush": true, 00:05:15.400 "reset": true, 00:05:15.400 "compare": false, 00:05:15.400 "compare_and_write": false, 00:05:15.400 "abort": true, 00:05:15.400 "nvme_admin": false, 00:05:15.400 "nvme_io": false 00:05:15.400 }, 00:05:15.400 "memory_domains": [ 00:05:15.400 { 00:05:15.400 "dma_device_id": "system", 00:05:15.400 "dma_device_type": 1 00:05:15.400 }, 00:05:15.400 { 00:05:15.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.400 "dma_device_type": 2 00:05:15.400 } 00:05:15.400 ], 00:05:15.400 "driver_specific": { 00:05:15.400 "passthru": { 00:05:15.400 "name": "Passthru0", 00:05:15.400 "base_bdev_name": "Malloc2" 00:05:15.400 } 00:05:15.400 } 00:05:15.400 } 00:05:15.400 ]' 00:05:15.400 17:40:50 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.400 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.659 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.659 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:15.659 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:15.659 17:40:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:15.659 00:05:15.659 real 0m0.226s 00:05:15.659 user 0m0.155s 00:05:15.659 sys 0m0.018s 00:05:15.659 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.659 17:40:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.659 ************************************ 00:05:15.659 END TEST rpc_daemon_integrity 00:05:15.659 ************************************ 00:05:15.659 17:40:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:15.659 17:40:50 rpc -- rpc/rpc.sh@84 -- # killprocess 813647 00:05:15.659 17:40:50 rpc -- common/autotest_common.sh@946 -- # '[' -z 813647 ']' 00:05:15.659 17:40:50 rpc -- common/autotest_common.sh@950 -- # kill -0 813647 00:05:15.659 17:40:50 rpc -- common/autotest_common.sh@951 -- # uname 00:05:15.659 17:40:50 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:15.659 17:40:50 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 813647 00:05:15.659 17:40:50 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:15.659 17:40:50 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:15.659 17:40:50 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 813647' 00:05:15.659 killing process with pid 813647 00:05:15.659 17:40:50 rpc -- common/autotest_common.sh@965 -- # kill 813647 00:05:15.659 17:40:50 rpc -- common/autotest_common.sh@970 -- # wait 813647 00:05:15.918 00:05:15.918 real 0m1.899s 00:05:15.918 user 0m2.376s 00:05:15.918 sys 0m0.593s 00:05:15.918 17:40:50 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.918 17:40:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.918 ************************************ 00:05:15.918 END TEST rpc 00:05:15.918 ************************************ 00:05:16.177 17:40:50 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:16.177 17:40:50 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.177 17:40:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.177 17:40:50 -- common/autotest_common.sh@10 -- # set +x 00:05:16.177 ************************************ 00:05:16.177 START TEST skip_rpc 00:05:16.177 ************************************ 00:05:16.177 17:40:50 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:16.177 * Looking for test storage... 00:05:16.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.177 17:40:50 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:16.177 17:40:50 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:16.177 17:40:50 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:16.177 17:40:50 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.177 17:40:50 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.177 17:40:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.177 ************************************ 00:05:16.177 START TEST skip_rpc 00:05:16.177 ************************************ 00:05:16.177 17:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:16.177 17:40:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=814082 00:05:16.177 17:40:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:16.177 17:40:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.177 17:40:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:16.177 [2024-07-20 17:40:50.884675] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
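The basic skip_rpc case running here starts a target with its RPC server disabled and then expects any RPC call to fail. A rough manual equivalent, assuming a built tree with build/bin/spdk_tgt and scripts/rpc.py (the full Jenkins workspace paths are shortened for readability):

  # start the target without an RPC server, core mask 0x1, as in the trace above
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  # with no RPC listener, this must exit non-zero; the test counts that as a pass
  ./scripts/rpc.py spdk_get_version || echo "RPC failed as expected"
  kill %1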
00:05:16.177 [2024-07-20 17:40:50.884739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid814082 ] 00:05:16.177 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.177 [2024-07-20 17:40:50.944152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.436 [2024-07-20 17:40:51.036553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.710 17:40:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:21.710 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:21.710 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:21.710 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:21.710 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.710 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:21.710 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.710 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:21.710 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.710 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.710 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:21.710 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:21.710 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:21.710 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:21.710 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:21.710 17:40:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:21.711 17:40:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 814082 00:05:21.711 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 814082 ']' 00:05:21.711 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 814082 00:05:21.711 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:21.711 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:21.711 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 814082 00:05:21.711 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:21.711 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:21.711 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 814082' 00:05:21.711 killing process with pid 814082 00:05:21.711 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 814082 00:05:21.711 17:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 814082 00:05:21.711 00:05:21.711 real 0m5.447s 00:05:21.711 user 0m5.140s 00:05:21.711 sys 0m0.315s 00:05:21.711 17:40:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.711 17:40:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.711 ************************************ 00:05:21.711 END TEST skip_rpc 
00:05:21.711 ************************************ 00:05:21.711 17:40:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:21.711 17:40:56 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.711 17:40:56 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.711 17:40:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.711 ************************************ 00:05:21.711 START TEST skip_rpc_with_json 00:05:21.711 ************************************ 00:05:21.711 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:21.711 17:40:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:21.711 17:40:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=814773 00:05:21.711 17:40:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.711 17:40:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.711 17:40:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 814773 00:05:21.711 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 814773 ']' 00:05:21.711 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.711 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:21.711 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.711 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:21.711 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.711 [2024-07-20 17:40:56.373832] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
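The skip_rpc_with_json variant that follows builds a little live state over RPC and captures it to a JSON config file. Roughly, using only the calls visible in this trace and the default /var/tmp/spdk.sock socket:

  # fails while no TCP transport exists, which the test checks first
  ./scripts/rpc.py nvmf_get_transports --trtype tcp
  # create the transport, then snapshot the running configuration
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > test/rpc/config.json

A second target started with --json test/rpc/config.json should then replay the same setup, which is what the later grep for 'TCP Transport Init' in log.txt verifies.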
00:05:21.711 [2024-07-20 17:40:56.373933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid814773 ] 00:05:21.711 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.711 [2024-07-20 17:40:56.431819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.969 [2024-07-20 17:40:56.523461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.227 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:22.227 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:22.227 17:40:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:22.227 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.227 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.227 [2024-07-20 17:40:56.777399] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:22.227 request: 00:05:22.227 { 00:05:22.227 "trtype": "tcp", 00:05:22.227 "method": "nvmf_get_transports", 00:05:22.227 "req_id": 1 00:05:22.227 } 00:05:22.227 Got JSON-RPC error response 00:05:22.227 response: 00:05:22.227 { 00:05:22.227 "code": -19, 00:05:22.227 "message": "No such device" 00:05:22.227 } 00:05:22.227 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:22.227 17:40:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:22.227 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.227 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.227 [2024-07-20 17:40:56.785513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.227 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.227 17:40:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:22.227 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.227 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.227 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.227 17:40:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:22.227 { 00:05:22.227 "subsystems": [ 00:05:22.227 { 00:05:22.227 "subsystem": "vfio_user_target", 00:05:22.227 "config": null 00:05:22.227 }, 00:05:22.227 { 00:05:22.227 "subsystem": "keyring", 00:05:22.227 "config": [] 00:05:22.227 }, 00:05:22.227 { 00:05:22.227 "subsystem": "iobuf", 00:05:22.227 "config": [ 00:05:22.227 { 00:05:22.227 "method": "iobuf_set_options", 00:05:22.227 "params": { 00:05:22.227 "small_pool_count": 8192, 00:05:22.227 "large_pool_count": 1024, 00:05:22.227 "small_bufsize": 8192, 00:05:22.227 "large_bufsize": 135168 00:05:22.227 } 00:05:22.227 } 00:05:22.227 ] 00:05:22.227 }, 00:05:22.227 { 00:05:22.227 "subsystem": "sock", 00:05:22.227 "config": [ 00:05:22.227 { 00:05:22.227 "method": "sock_set_default_impl", 00:05:22.227 "params": { 00:05:22.227 "impl_name": "posix" 00:05:22.227 } 00:05:22.227 }, 00:05:22.227 { 00:05:22.227 "method": 
"sock_impl_set_options", 00:05:22.227 "params": { 00:05:22.227 "impl_name": "ssl", 00:05:22.227 "recv_buf_size": 4096, 00:05:22.227 "send_buf_size": 4096, 00:05:22.227 "enable_recv_pipe": true, 00:05:22.227 "enable_quickack": false, 00:05:22.227 "enable_placement_id": 0, 00:05:22.227 "enable_zerocopy_send_server": true, 00:05:22.227 "enable_zerocopy_send_client": false, 00:05:22.227 "zerocopy_threshold": 0, 00:05:22.227 "tls_version": 0, 00:05:22.227 "enable_ktls": false 00:05:22.227 } 00:05:22.227 }, 00:05:22.227 { 00:05:22.227 "method": "sock_impl_set_options", 00:05:22.227 "params": { 00:05:22.227 "impl_name": "posix", 00:05:22.227 "recv_buf_size": 2097152, 00:05:22.227 "send_buf_size": 2097152, 00:05:22.227 "enable_recv_pipe": true, 00:05:22.227 "enable_quickack": false, 00:05:22.227 "enable_placement_id": 0, 00:05:22.227 "enable_zerocopy_send_server": true, 00:05:22.227 "enable_zerocopy_send_client": false, 00:05:22.227 "zerocopy_threshold": 0, 00:05:22.227 "tls_version": 0, 00:05:22.227 "enable_ktls": false 00:05:22.227 } 00:05:22.227 } 00:05:22.227 ] 00:05:22.227 }, 00:05:22.227 { 00:05:22.227 "subsystem": "vmd", 00:05:22.227 "config": [] 00:05:22.227 }, 00:05:22.227 { 00:05:22.227 "subsystem": "accel", 00:05:22.227 "config": [ 00:05:22.227 { 00:05:22.228 "method": "accel_set_options", 00:05:22.228 "params": { 00:05:22.228 "small_cache_size": 128, 00:05:22.228 "large_cache_size": 16, 00:05:22.228 "task_count": 2048, 00:05:22.228 "sequence_count": 2048, 00:05:22.228 "buf_count": 2048 00:05:22.228 } 00:05:22.228 } 00:05:22.228 ] 00:05:22.228 }, 00:05:22.228 { 00:05:22.228 "subsystem": "bdev", 00:05:22.228 "config": [ 00:05:22.228 { 00:05:22.228 "method": "bdev_set_options", 00:05:22.228 "params": { 00:05:22.228 "bdev_io_pool_size": 65535, 00:05:22.228 "bdev_io_cache_size": 256, 00:05:22.228 "bdev_auto_examine": true, 00:05:22.228 "iobuf_small_cache_size": 128, 00:05:22.228 "iobuf_large_cache_size": 16 00:05:22.228 } 00:05:22.228 }, 00:05:22.228 { 00:05:22.228 "method": "bdev_raid_set_options", 00:05:22.228 "params": { 00:05:22.228 "process_window_size_kb": 1024 00:05:22.228 } 00:05:22.228 }, 00:05:22.228 { 00:05:22.228 "method": "bdev_iscsi_set_options", 00:05:22.228 "params": { 00:05:22.228 "timeout_sec": 30 00:05:22.228 } 00:05:22.228 }, 00:05:22.228 { 00:05:22.228 "method": "bdev_nvme_set_options", 00:05:22.228 "params": { 00:05:22.228 "action_on_timeout": "none", 00:05:22.228 "timeout_us": 0, 00:05:22.228 "timeout_admin_us": 0, 00:05:22.228 "keep_alive_timeout_ms": 10000, 00:05:22.228 "arbitration_burst": 0, 00:05:22.228 "low_priority_weight": 0, 00:05:22.228 "medium_priority_weight": 0, 00:05:22.228 "high_priority_weight": 0, 00:05:22.228 "nvme_adminq_poll_period_us": 10000, 00:05:22.228 "nvme_ioq_poll_period_us": 0, 00:05:22.228 "io_queue_requests": 0, 00:05:22.228 "delay_cmd_submit": true, 00:05:22.228 "transport_retry_count": 4, 00:05:22.228 "bdev_retry_count": 3, 00:05:22.228 "transport_ack_timeout": 0, 00:05:22.228 "ctrlr_loss_timeout_sec": 0, 00:05:22.228 "reconnect_delay_sec": 0, 00:05:22.228 "fast_io_fail_timeout_sec": 0, 00:05:22.228 "disable_auto_failback": false, 00:05:22.228 "generate_uuids": false, 00:05:22.228 "transport_tos": 0, 00:05:22.228 "nvme_error_stat": false, 00:05:22.228 "rdma_srq_size": 0, 00:05:22.228 "io_path_stat": false, 00:05:22.228 "allow_accel_sequence": false, 00:05:22.228 "rdma_max_cq_size": 0, 00:05:22.228 "rdma_cm_event_timeout_ms": 0, 00:05:22.228 "dhchap_digests": [ 00:05:22.228 "sha256", 00:05:22.228 "sha384", 00:05:22.228 "sha512" 
00:05:22.228 ], 00:05:22.228 "dhchap_dhgroups": [ 00:05:22.228 "null", 00:05:22.228 "ffdhe2048", 00:05:22.228 "ffdhe3072", 00:05:22.228 "ffdhe4096", 00:05:22.228 "ffdhe6144", 00:05:22.228 "ffdhe8192" 00:05:22.228 ] 00:05:22.228 } 00:05:22.228 }, 00:05:22.228 { 00:05:22.228 "method": "bdev_nvme_set_hotplug", 00:05:22.228 "params": { 00:05:22.228 "period_us": 100000, 00:05:22.228 "enable": false 00:05:22.228 } 00:05:22.228 }, 00:05:22.228 { 00:05:22.228 "method": "bdev_wait_for_examine" 00:05:22.228 } 00:05:22.228 ] 00:05:22.228 }, 00:05:22.228 { 00:05:22.228 "subsystem": "scsi", 00:05:22.228 "config": null 00:05:22.228 }, 00:05:22.228 { 00:05:22.228 "subsystem": "scheduler", 00:05:22.228 "config": [ 00:05:22.228 { 00:05:22.228 "method": "framework_set_scheduler", 00:05:22.228 "params": { 00:05:22.228 "name": "static" 00:05:22.228 } 00:05:22.228 } 00:05:22.228 ] 00:05:22.228 }, 00:05:22.228 { 00:05:22.228 "subsystem": "vhost_scsi", 00:05:22.228 "config": [] 00:05:22.228 }, 00:05:22.228 { 00:05:22.228 "subsystem": "vhost_blk", 00:05:22.228 "config": [] 00:05:22.228 }, 00:05:22.228 { 00:05:22.228 "subsystem": "ublk", 00:05:22.228 "config": [] 00:05:22.228 }, 00:05:22.228 { 00:05:22.228 "subsystem": "nbd", 00:05:22.228 "config": [] 00:05:22.228 }, 00:05:22.228 { 00:05:22.228 "subsystem": "nvmf", 00:05:22.228 "config": [ 00:05:22.228 { 00:05:22.228 "method": "nvmf_set_config", 00:05:22.228 "params": { 00:05:22.228 "discovery_filter": "match_any", 00:05:22.228 "admin_cmd_passthru": { 00:05:22.228 "identify_ctrlr": false 00:05:22.228 } 00:05:22.228 } 00:05:22.228 }, 00:05:22.228 { 00:05:22.228 "method": "nvmf_set_max_subsystems", 00:05:22.228 "params": { 00:05:22.228 "max_subsystems": 1024 00:05:22.228 } 00:05:22.228 }, 00:05:22.228 { 00:05:22.228 "method": "nvmf_set_crdt", 00:05:22.228 "params": { 00:05:22.228 "crdt1": 0, 00:05:22.228 "crdt2": 0, 00:05:22.228 "crdt3": 0 00:05:22.228 } 00:05:22.228 }, 00:05:22.228 { 00:05:22.228 "method": "nvmf_create_transport", 00:05:22.228 "params": { 00:05:22.228 "trtype": "TCP", 00:05:22.228 "max_queue_depth": 128, 00:05:22.228 "max_io_qpairs_per_ctrlr": 127, 00:05:22.228 "in_capsule_data_size": 4096, 00:05:22.228 "max_io_size": 131072, 00:05:22.228 "io_unit_size": 131072, 00:05:22.228 "max_aq_depth": 128, 00:05:22.228 "num_shared_buffers": 511, 00:05:22.228 "buf_cache_size": 4294967295, 00:05:22.228 "dif_insert_or_strip": false, 00:05:22.228 "zcopy": false, 00:05:22.228 "c2h_success": true, 00:05:22.228 "sock_priority": 0, 00:05:22.228 "abort_timeout_sec": 1, 00:05:22.228 "ack_timeout": 0, 00:05:22.228 "data_wr_pool_size": 0 00:05:22.228 } 00:05:22.228 } 00:05:22.228 ] 00:05:22.228 }, 00:05:22.228 { 00:05:22.228 "subsystem": "iscsi", 00:05:22.228 "config": [ 00:05:22.228 { 00:05:22.228 "method": "iscsi_set_options", 00:05:22.228 "params": { 00:05:22.228 "node_base": "iqn.2016-06.io.spdk", 00:05:22.228 "max_sessions": 128, 00:05:22.228 "max_connections_per_session": 2, 00:05:22.228 "max_queue_depth": 64, 00:05:22.228 "default_time2wait": 2, 00:05:22.228 "default_time2retain": 20, 00:05:22.228 "first_burst_length": 8192, 00:05:22.228 "immediate_data": true, 00:05:22.228 "allow_duplicated_isid": false, 00:05:22.228 "error_recovery_level": 0, 00:05:22.228 "nop_timeout": 60, 00:05:22.228 "nop_in_interval": 30, 00:05:22.228 "disable_chap": false, 00:05:22.228 "require_chap": false, 00:05:22.228 "mutual_chap": false, 00:05:22.228 "chap_group": 0, 00:05:22.228 "max_large_datain_per_connection": 64, 00:05:22.228 "max_r2t_per_connection": 4, 00:05:22.228 
"pdu_pool_size": 36864, 00:05:22.228 "immediate_data_pool_size": 16384, 00:05:22.228 "data_out_pool_size": 2048 00:05:22.228 } 00:05:22.228 } 00:05:22.228 ] 00:05:22.228 } 00:05:22.228 ] 00:05:22.228 } 00:05:22.228 17:40:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:22.228 17:40:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 814773 00:05:22.228 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 814773 ']' 00:05:22.228 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 814773 00:05:22.228 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:22.228 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:22.228 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 814773 00:05:22.228 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:22.228 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:22.228 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 814773' 00:05:22.228 killing process with pid 814773 00:05:22.228 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 814773 00:05:22.228 17:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 814773 00:05:22.794 17:40:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=814915 00:05:22.794 17:40:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:22.794 17:40:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:28.055 17:41:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 814915 00:05:28.055 17:41:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 814915 ']' 00:05:28.055 17:41:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 814915 00:05:28.055 17:41:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:28.055 17:41:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:28.055 17:41:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 814915 00:05:28.055 17:41:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:28.055 17:41:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:28.056 17:41:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 814915' 00:05:28.056 killing process with pid 814915 00:05:28.056 17:41:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 814915 00:05:28.056 17:41:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 814915 00:05:28.056 17:41:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:28.056 17:41:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:28.056 00:05:28.056 real 0m6.499s 
00:05:28.056 user 0m6.089s 00:05:28.056 sys 0m0.702s 00:05:28.056 17:41:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.056 17:41:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.056 ************************************ 00:05:28.056 END TEST skip_rpc_with_json 00:05:28.056 ************************************ 00:05:28.056 17:41:02 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:28.056 17:41:02 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.056 17:41:02 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.056 17:41:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.313 ************************************ 00:05:28.313 START TEST skip_rpc_with_delay 00:05:28.313 ************************************ 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.313 [2024-07-20 17:41:02.930080] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
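The error above is the outcome skip_rpc_with_delay is looking for: --wait-for-rpc asks the app to pause startup until it is told to continue over RPC, which cannot work once --no-rpc-server has disabled the RPC server, so spdk_app_start rejects the combination. A one-line reproduction (binary path shortened):

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # -> app.c: Cannot use '--wait-for-rpc' if no RPC server is going to be started.

The test wraps the call in NOT and only asserts a non-zero exit status; the target never reaches a running reactor.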
00:05:28.313 [2024-07-20 17:41:02.930200] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:28.313 00:05:28.313 real 0m0.069s 00:05:28.313 user 0m0.046s 00:05:28.313 sys 0m0.022s 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.313 17:41:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:28.313 ************************************ 00:05:28.313 END TEST skip_rpc_with_delay 00:05:28.313 ************************************ 00:05:28.313 17:41:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:28.313 17:41:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:28.313 17:41:02 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:28.313 17:41:02 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.313 17:41:02 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.313 17:41:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.313 ************************************ 00:05:28.313 START TEST exit_on_failed_rpc_init 00:05:28.313 ************************************ 00:05:28.313 17:41:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:28.313 17:41:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=815627 00:05:28.313 17:41:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.313 17:41:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 815627 00:05:28.313 17:41:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 815627 ']' 00:05:28.313 17:41:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.313 17:41:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:28.313 17:41:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.313 17:41:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:28.313 17:41:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.313 [2024-07-20 17:41:03.045415] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:28.313 [2024-07-20 17:41:03.045517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid815627 ] 00:05:28.313 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.313 [2024-07-20 17:41:03.108243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.570 [2024-07-20 17:41:03.197327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.826 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:28.826 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:28.826 17:41:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.826 17:41:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.826 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:28.826 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.826 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.826 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.826 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.826 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.826 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.826 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.826 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.827 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:28.827 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.827 [2024-07-20 17:41:03.506056] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:28.827 [2024-07-20 17:41:03.506150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid815643 ] 00:05:28.827 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.827 [2024-07-20 17:41:03.566557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.083 [2024-07-20 17:41:03.661451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.083 [2024-07-20 17:41:03.661574] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
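This failure is the one exit_on_failed_rpc_init deliberately provokes: the first target already owns the default /var/tmp/spdk.sock, so the second instance (core mask 0x2) cannot bind its RPC listener and exits with an error, as the messages that follow show. Running two targets side by side normally means giving each its own socket with -r on spdk_tgt and pointing rpc.py at it with -s; a sketch, where the second socket name is purely illustrative:

  ./build/bin/spdk_tgt -m 0x1 &                                # default /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_second.sock &   # hypothetical second socket
  ./scripts/rpc.py -s /var/tmp/spdk_second.sock spdk_get_version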
00:05:29.083 [2024-07-20 17:41:03.661596] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:29.083 [2024-07-20 17:41:03.661609] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 815627 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 815627 ']' 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 815627 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 815627 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 815627' 00:05:29.083 killing process with pid 815627 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 815627 00:05:29.083 17:41:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 815627 00:05:29.652 00:05:29.652 real 0m1.200s 00:05:29.652 user 0m1.302s 00:05:29.652 sys 0m0.446s 00:05:29.652 17:41:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.652 17:41:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.652 ************************************ 00:05:29.652 END TEST exit_on_failed_rpc_init 00:05:29.652 ************************************ 00:05:29.652 17:41:04 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:29.652 00:05:29.652 real 0m13.460s 00:05:29.652 user 0m12.670s 00:05:29.652 sys 0m1.653s 00:05:29.652 17:41:04 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.652 17:41:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.652 ************************************ 00:05:29.652 END TEST skip_rpc 00:05:29.652 ************************************ 00:05:29.652 17:41:04 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:29.652 17:41:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.652 17:41:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.652 17:41:04 -- 
common/autotest_common.sh@10 -- # set +x 00:05:29.652 ************************************ 00:05:29.652 START TEST rpc_client 00:05:29.652 ************************************ 00:05:29.652 17:41:04 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:29.652 * Looking for test storage... 00:05:29.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:29.652 17:41:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:29.652 OK 00:05:29.652 17:41:04 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:29.652 00:05:29.652 real 0m0.069s 00:05:29.652 user 0m0.037s 00:05:29.652 sys 0m0.037s 00:05:29.652 17:41:04 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.652 17:41:04 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:29.652 ************************************ 00:05:29.652 END TEST rpc_client 00:05:29.652 ************************************ 00:05:29.652 17:41:04 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:29.652 17:41:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.652 17:41:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.652 17:41:04 -- common/autotest_common.sh@10 -- # set +x 00:05:29.652 ************************************ 00:05:29.652 START TEST json_config 00:05:29.652 ************************************ 00:05:29.652 17:41:04 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:29.652 17:41:04 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:29.652 17:41:04 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.652 17:41:04 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.652 17:41:04 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.652 17:41:04 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.652 17:41:04 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.652 17:41:04 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.652 17:41:04 json_config -- paths/export.sh@5 -- # export PATH 00:05:29.652 17:41:04 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@47 -- # : 0 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:29.652 17:41:04 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:29.652 17:41:04 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:29.652 17:41:04 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:29.652 17:41:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:29.652 17:41:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:29.652 17:41:04 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:29.652 17:41:04 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:29.652 17:41:04 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:29.653 17:41:04 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:29.653 17:41:04 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:29.653 17:41:04 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:29.653 17:41:04 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:29.653 17:41:04 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:29.653 17:41:04 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:29.653 17:41:04 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:29.653 17:41:04 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:29.653 17:41:04 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:29.653 INFO: JSON configuration test init 00:05:29.653 17:41:04 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:29.653 17:41:04 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:29.653 17:41:04 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:29.653 17:41:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.653 17:41:04 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:29.653 17:41:04 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:29.653 17:41:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.653 17:41:04 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:29.653 17:41:04 json_config -- json_config/common.sh@9 -- # local app=target 00:05:29.653 17:41:04 json_config -- json_config/common.sh@10 -- # shift 00:05:29.653 17:41:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:29.653 17:41:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:29.653 17:41:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:29.653 17:41:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.653 17:41:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.653 17:41:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=815884 00:05:29.653 17:41:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:29.653 17:41:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:29.653 Waiting for target to run... 
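The launch above uses --wait-for-rpc, so spdk_tgt idles until its RPC socket is up and a configuration is pushed to it before subsystem initialization continues. A rough sketch of that handshake, with paths relative to the SPDK tree; the poll loop is purely illustrative and not how the harness's waitforlisten helper is implemented:
  # start the target paused until RPC-driven init is triggered
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  tgt_pid=$!
  # poll the RPC socket until it answers
  until scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  # feed it an initial config on stdin, as this run does with gen_nvme.sh output
  scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config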
00:05:29.653 17:41:04 json_config -- json_config/common.sh@25 -- # waitforlisten 815884 /var/tmp/spdk_tgt.sock 00:05:29.653 17:41:04 json_config -- common/autotest_common.sh@827 -- # '[' -z 815884 ']' 00:05:29.653 17:41:04 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:29.653 17:41:04 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:29.653 17:41:04 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:29.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:29.653 17:41:04 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:29.653 17:41:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.930 [2024-07-20 17:41:04.488268] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:29.930 [2024-07-20 17:41:04.488358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid815884 ] 00:05:29.930 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.497 [2024-07-20 17:41:04.992773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.497 [2024-07-20 17:41:05.074523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.754 17:41:05 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:30.754 17:41:05 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:30.754 17:41:05 json_config -- json_config/common.sh@26 -- # echo '' 00:05:30.754 00:05:30.754 17:41:05 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:30.754 17:41:05 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:30.754 17:41:05 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:30.754 17:41:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.754 17:41:05 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:30.754 17:41:05 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:30.754 17:41:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.754 17:41:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.754 17:41:05 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:30.754 17:41:05 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:30.754 17:41:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:34.030 17:41:08 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:34.030 17:41:08 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:34.030 17:41:08 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:34.030 17:41:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.030 17:41:08 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:34.030 17:41:08 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:34.030 17:41:08 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:05:34.030 17:41:08 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:34.030 17:41:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:34.030 17:41:08 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:34.288 17:41:08 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:34.288 17:41:08 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:34.288 17:41:08 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:34.288 17:41:08 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:34.288 17:41:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.288 17:41:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.288 17:41:08 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:34.288 17:41:08 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:34.288 17:41:08 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:34.288 17:41:08 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:34.288 17:41:08 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:34.288 17:41:08 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:34.288 17:41:08 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:34.288 17:41:08 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:34.288 17:41:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.288 17:41:08 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:34.288 17:41:08 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:34.288 17:41:08 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:34.288 17:41:08 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:34.288 17:41:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:34.545 MallocForNvmf0 00:05:34.545 17:41:09 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:34.545 17:41:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:34.802 MallocForNvmf1 00:05:34.802 17:41:09 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:34.802 17:41:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:34.802 [2024-07-20 17:41:09.585336] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:35.059 17:41:09 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:35.059 17:41:09 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:35.059 17:41:09 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:35.059 17:41:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:35.317 17:41:10 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:35.317 17:41:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:35.575 17:41:10 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:35.575 17:41:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:35.833 [2024-07-20 17:41:10.568546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:35.833 17:41:10 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:35.833 17:41:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.833 17:41:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.833 17:41:10 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:35.833 17:41:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.833 17:41:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.833 17:41:10 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:35.833 17:41:10 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:35.834 17:41:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:36.091 MallocBdevForConfigChangeCheck 00:05:36.091 17:41:10 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:36.091 17:41:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.091 17:41:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.350 17:41:10 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:36.350 17:41:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.608 17:41:11 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:36.608 INFO: shutting down applications... 
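Before the shutdown that follows, the create_nvmf_subsystem_config step traced above assembled a minimal NVMe/TCP target: two malloc bdevs, a TCP transport, one subsystem carrying both bdevs as namespaces, and a listener on 127.0.0.1:4420. Condensed to the underlying RPCs (arguments copied from the trace, socket path as used in this run):
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420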
00:05:36.608 17:41:11 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:36.608 17:41:11 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:36.608 17:41:11 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:36.608 17:41:11 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:38.503 Calling clear_iscsi_subsystem 00:05:38.503 Calling clear_nvmf_subsystem 00:05:38.503 Calling clear_nbd_subsystem 00:05:38.503 Calling clear_ublk_subsystem 00:05:38.503 Calling clear_vhost_blk_subsystem 00:05:38.503 Calling clear_vhost_scsi_subsystem 00:05:38.503 Calling clear_bdev_subsystem 00:05:38.503 17:41:12 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:38.503 17:41:12 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:38.503 17:41:12 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:38.503 17:41:12 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.503 17:41:12 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:38.503 17:41:12 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:38.503 17:41:13 json_config -- json_config/json_config.sh@345 -- # break 00:05:38.503 17:41:13 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:38.503 17:41:13 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:38.503 17:41:13 json_config -- json_config/common.sh@31 -- # local app=target 00:05:38.503 17:41:13 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:38.762 17:41:13 json_config -- json_config/common.sh@35 -- # [[ -n 815884 ]] 00:05:38.762 17:41:13 json_config -- json_config/common.sh@38 -- # kill -SIGINT 815884 00:05:38.762 17:41:13 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:38.762 17:41:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.762 17:41:13 json_config -- json_config/common.sh@41 -- # kill -0 815884 00:05:38.762 17:41:13 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.018 17:41:13 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.018 17:41:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.018 17:41:13 json_config -- json_config/common.sh@41 -- # kill -0 815884 00:05:39.018 17:41:13 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:39.018 17:41:13 json_config -- json_config/common.sh@43 -- # break 00:05:39.018 17:41:13 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:39.018 17:41:13 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:39.018 SPDK target shutdown done 00:05:39.018 17:41:13 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:39.018 INFO: relaunching applications... 
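The shutdown just traced, and the relaunch announced here, follow a simple pattern: snapshot the live configuration, send SIGINT to the target, poll until the pid is gone (up to 30 half-second waits, as the harness does), then restart from the saved JSON. A sketch under the same paths and timing as this run, with $tgt_pid standing in for the traced pid 815884:
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json   # snapshot taken before shutdown
  kill -SIGINT "$tgt_pid"                        # ask the target to exit cleanly
  for _ in $(seq 1 30); do                       # give it up to ~15 seconds
      kill -0 "$tgt_pid" 2>/dev/null || break
      sleep 0.5
  done
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &   # relaunch from the snapshot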
00:05:39.018 17:41:13 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.018 17:41:13 json_config -- json_config/common.sh@9 -- # local app=target 00:05:39.018 17:41:13 json_config -- json_config/common.sh@10 -- # shift 00:05:39.018 17:41:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:39.018 17:41:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:39.018 17:41:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:39.018 17:41:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.018 17:41:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.018 17:41:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=817096 00:05:39.018 17:41:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:39.018 17:41:13 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.018 Waiting for target to run... 00:05:39.018 17:41:13 json_config -- json_config/common.sh@25 -- # waitforlisten 817096 /var/tmp/spdk_tgt.sock 00:05:39.018 17:41:13 json_config -- common/autotest_common.sh@827 -- # '[' -z 817096 ']' 00:05:39.018 17:41:13 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:39.018 17:41:13 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:39.018 17:41:13 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:39.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:39.018 17:41:13 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:39.018 17:41:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.276 [2024-07-20 17:41:13.857886] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:39.276 [2024-07-20 17:41:13.857969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid817096 ] 00:05:39.276 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.839 [2024-07-20 17:41:14.364701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.839 [2024-07-20 17:41:14.446864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.118 [2024-07-20 17:41:17.474815] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:43.118 [2024-07-20 17:41:17.507271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:43.684 17:41:18 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:43.684 17:41:18 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:43.684 17:41:18 json_config -- json_config/common.sh@26 -- # echo '' 00:05:43.684 00:05:43.684 17:41:18 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:43.684 17:41:18 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:43.684 INFO: Checking if target configuration is the same... 
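The json_diff.sh trace that follows boils down to dumping the running target's config, normalizing key order in both JSON documents, and diffing them. A condensed sketch, assuming config_filter.py filters stdin to stdout the way the harness invokes it:
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method sort > /tmp/live_config.json
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved_config.json
  diff -u /tmp/live_config.json /tmp/saved_config.json && echo 'INFO: JSON config files are the same'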
00:05:43.684 17:41:18 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.684 17:41:18 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:43.684 17:41:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:43.684 + '[' 2 -ne 2 ']' 00:05:43.684 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:43.684 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:43.684 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:43.684 +++ basename /dev/fd/62 00:05:43.684 ++ mktemp /tmp/62.XXX 00:05:43.684 + tmp_file_1=/tmp/62.54K 00:05:43.684 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.684 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:43.684 + tmp_file_2=/tmp/spdk_tgt_config.json.E6g 00:05:43.684 + ret=0 00:05:43.684 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:43.942 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:43.942 + diff -u /tmp/62.54K /tmp/spdk_tgt_config.json.E6g 00:05:43.942 + echo 'INFO: JSON config files are the same' 00:05:43.942 INFO: JSON config files are the same 00:05:43.942 + rm /tmp/62.54K /tmp/spdk_tgt_config.json.E6g 00:05:43.942 + exit 0 00:05:43.942 17:41:18 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:43.942 17:41:18 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:43.942 INFO: changing configuration and checking if this can be detected... 00:05:43.942 17:41:18 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:43.942 17:41:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:44.201 17:41:18 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.201 17:41:18 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:44.201 17:41:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:44.201 + '[' 2 -ne 2 ']' 00:05:44.201 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:44.201 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:44.201 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:44.201 +++ basename /dev/fd/62 00:05:44.201 ++ mktemp /tmp/62.XXX 00:05:44.201 + tmp_file_1=/tmp/62.UQ0 00:05:44.201 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.201 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:44.201 + tmp_file_2=/tmp/spdk_tgt_config.json.jHo 00:05:44.201 + ret=0 00:05:44.201 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:44.768 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:44.768 + diff -u /tmp/62.UQ0 /tmp/spdk_tgt_config.json.jHo 00:05:44.768 + ret=1 00:05:44.768 + echo '=== Start of file: /tmp/62.UQ0 ===' 00:05:44.768 + cat /tmp/62.UQ0 00:05:44.768 + echo '=== End of file: /tmp/62.UQ0 ===' 00:05:44.768 + echo '' 00:05:44.768 + echo '=== Start of file: /tmp/spdk_tgt_config.json.jHo ===' 00:05:44.768 + cat /tmp/spdk_tgt_config.json.jHo 00:05:44.768 + echo '=== End of file: /tmp/spdk_tgt_config.json.jHo ===' 00:05:44.768 + echo '' 00:05:44.768 + rm /tmp/62.UQ0 /tmp/spdk_tgt_config.json.jHo 00:05:44.768 + exit 1 00:05:44.768 17:41:19 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:44.768 INFO: configuration change detected. 00:05:44.768 17:41:19 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:44.768 17:41:19 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:44.768 17:41:19 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:44.768 17:41:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.768 17:41:19 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:44.768 17:41:19 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:44.768 17:41:19 json_config -- json_config/json_config.sh@317 -- # [[ -n 817096 ]] 00:05:44.768 17:41:19 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:44.768 17:41:19 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:44.768 17:41:19 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:44.768 17:41:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.768 17:41:19 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:44.768 17:41:19 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:44.768 17:41:19 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:44.768 17:41:19 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:44.768 17:41:19 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:44.768 17:41:19 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:44.768 17:41:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:44.768 17:41:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.768 17:41:19 json_config -- json_config/json_config.sh@323 -- # killprocess 817096 00:05:44.768 17:41:19 json_config -- common/autotest_common.sh@946 -- # '[' -z 817096 ']' 00:05:44.768 17:41:19 json_config -- common/autotest_common.sh@950 -- # kill -0 817096 00:05:44.768 17:41:19 json_config -- common/autotest_common.sh@951 -- # uname 00:05:44.768 17:41:19 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:44.768 17:41:19 
json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 817096 00:05:44.768 17:41:19 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:44.768 17:41:19 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:44.768 17:41:19 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 817096' 00:05:44.768 killing process with pid 817096 00:05:44.768 17:41:19 json_config -- common/autotest_common.sh@965 -- # kill 817096 00:05:44.768 17:41:19 json_config -- common/autotest_common.sh@970 -- # wait 817096 00:05:46.682 17:41:21 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.682 17:41:21 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:46.682 17:41:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.682 17:41:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.682 17:41:21 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:46.682 17:41:21 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:46.682 INFO: Success 00:05:46.682 00:05:46.682 real 0m16.709s 00:05:46.682 user 0m18.503s 00:05:46.682 sys 0m2.142s 00:05:46.682 17:41:21 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.682 17:41:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.682 ************************************ 00:05:46.682 END TEST json_config 00:05:46.682 ************************************ 00:05:46.682 17:41:21 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:46.682 17:41:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:46.682 17:41:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.682 17:41:21 -- common/autotest_common.sh@10 -- # set +x 00:05:46.682 ************************************ 00:05:46.682 START TEST json_config_extra_key 00:05:46.682 ************************************ 00:05:46.682 17:41:21 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:46.682 17:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.682 17:41:21 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:46.682 17:41:21 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.682 17:41:21 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.682 17:41:21 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.682 17:41:21 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.682 17:41:21 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.682 17:41:21 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.682 17:41:21 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:46.682 17:41:21 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:46.682 17:41:21 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.682 17:41:21 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.683 17:41:21 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.683 17:41:21 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:46.683 17:41:21 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:46.683 17:41:21 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:46.683 17:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:46.683 17:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:46.683 17:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:46.683 17:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:46.683 17:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:46.683 17:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:46.683 17:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:46.683 17:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:46.683 17:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:46.683 17:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.683 17:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:46.683 INFO: launching applications... 00:05:46.683 17:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:46.683 17:41:21 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:46.683 17:41:21 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:46.683 17:41:21 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.683 17:41:21 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.683 17:41:21 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.683 17:41:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.683 17:41:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.683 17:41:21 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=818116 00:05:46.683 17:41:21 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:46.683 17:41:21 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.683 Waiting for target to run... 
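In contrast to the --wait-for-rpc flow earlier in this log, the json_config_extra_key test hands spdk_tgt a ready-made JSON file at start-up; stripped of the harness, the launch above reduces to:
  # start the target directly from a prepared JSON config instead of configuring it over RPC
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json test/json_config/extra_key.json &
  # once the RPC socket answers, the app is already fully configured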
00:05:46.683 17:41:21 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 818116 /var/tmp/spdk_tgt.sock 00:05:46.683 17:41:21 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 818116 ']' 00:05:46.683 17:41:21 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.683 17:41:21 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:46.683 17:41:21 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.683 17:41:21 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:46.683 17:41:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:46.683 [2024-07-20 17:41:21.240736] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:46.683 [2024-07-20 17:41:21.240842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818116 ] 00:05:46.683 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.994 [2024-07-20 17:41:21.744917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.251 [2024-07-20 17:41:21.824384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.508 17:41:22 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:47.508 17:41:22 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:47.508 17:41:22 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:47.508 00:05:47.508 17:41:22 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:47.508 INFO: shutting down applications... 
00:05:47.508 17:41:22 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:47.508 17:41:22 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:47.508 17:41:22 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:47.508 17:41:22 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 818116 ]] 00:05:47.508 17:41:22 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 818116 00:05:47.508 17:41:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:47.508 17:41:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.508 17:41:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 818116 00:05:47.508 17:41:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:48.072 17:41:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:48.072 17:41:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.072 17:41:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 818116 00:05:48.072 17:41:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:48.072 17:41:22 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:48.072 17:41:22 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:48.072 17:41:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:48.072 SPDK target shutdown done 00:05:48.072 17:41:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:48.072 Success 00:05:48.072 00:05:48.072 real 0m1.597s 00:05:48.072 user 0m1.431s 00:05:48.072 sys 0m0.606s 00:05:48.073 17:41:22 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.073 17:41:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:48.073 ************************************ 00:05:48.073 END TEST json_config_extra_key 00:05:48.073 ************************************ 00:05:48.073 17:41:22 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.073 17:41:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.073 17:41:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.073 17:41:22 -- common/autotest_common.sh@10 -- # set +x 00:05:48.073 ************************************ 00:05:48.073 START TEST alias_rpc 00:05:48.073 ************************************ 00:05:48.073 17:41:22 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.073 * Looking for test storage... 
00:05:48.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:48.073 17:41:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:48.073 17:41:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=818331 00:05:48.073 17:41:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.073 17:41:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 818331 00:05:48.073 17:41:22 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 818331 ']' 00:05:48.073 17:41:22 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.073 17:41:22 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:48.073 17:41:22 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.073 17:41:22 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:48.073 17:41:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.330 [2024-07-20 17:41:22.884633] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:48.330 [2024-07-20 17:41:22.884744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818331 ] 00:05:48.330 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.330 [2024-07-20 17:41:22.942763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.331 [2024-07-20 17:41:23.026727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.587 17:41:23 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:48.587 17:41:23 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:48.587 17:41:23 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:48.843 17:41:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 818331 00:05:48.843 17:41:23 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 818331 ']' 00:05:48.843 17:41:23 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 818331 00:05:48.843 17:41:23 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:48.843 17:41:23 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:48.843 17:41:23 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 818331 00:05:48.843 17:41:23 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:48.843 17:41:23 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:48.843 17:41:23 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 818331' 00:05:48.843 killing process with pid 818331 00:05:48.843 17:41:23 alias_rpc -- common/autotest_common.sh@965 -- # kill 818331 00:05:48.843 17:41:23 alias_rpc -- common/autotest_common.sh@970 -- # wait 818331 00:05:49.406 00:05:49.406 real 0m1.214s 00:05:49.406 user 0m1.270s 00:05:49.406 sys 0m0.433s 00:05:49.406 17:41:23 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.406 17:41:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.406 
************************************ 00:05:49.406 END TEST alias_rpc 00:05:49.406 ************************************ 00:05:49.406 17:41:24 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:49.406 17:41:24 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:49.406 17:41:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.406 17:41:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.406 17:41:24 -- common/autotest_common.sh@10 -- # set +x 00:05:49.406 ************************************ 00:05:49.406 START TEST spdkcli_tcp 00:05:49.406 ************************************ 00:05:49.406 17:41:24 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:49.406 * Looking for test storage... 00:05:49.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:49.406 17:41:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:49.406 17:41:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:49.406 17:41:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:49.406 17:41:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:49.406 17:41:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:49.406 17:41:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:49.406 17:41:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:49.406 17:41:24 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:49.406 17:41:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.406 17:41:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=818608 00:05:49.406 17:41:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:49.406 17:41:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 818608 00:05:49.406 17:41:24 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 818608 ']' 00:05:49.406 17:41:24 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.406 17:41:24 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:49.407 17:41:24 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.407 17:41:24 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:49.407 17:41:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.407 [2024-07-20 17:41:24.145434] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:49.407 [2024-07-20 17:41:24.145526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818608 ] 00:05:49.407 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.663 [2024-07-20 17:41:24.204214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.663 [2024-07-20 17:41:24.292729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.663 [2024-07-20 17:41:24.292733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.921 17:41:24 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:49.921 17:41:24 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:49.921 17:41:24 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=818628 00:05:49.921 17:41:24 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:49.921 17:41:24 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:50.178 [ 00:05:50.178 "bdev_malloc_delete", 00:05:50.178 "bdev_malloc_create", 00:05:50.178 "bdev_null_resize", 00:05:50.178 "bdev_null_delete", 00:05:50.178 "bdev_null_create", 00:05:50.178 "bdev_nvme_cuse_unregister", 00:05:50.178 "bdev_nvme_cuse_register", 00:05:50.178 "bdev_opal_new_user", 00:05:50.178 "bdev_opal_set_lock_state", 00:05:50.178 "bdev_opal_delete", 00:05:50.178 "bdev_opal_get_info", 00:05:50.178 "bdev_opal_create", 00:05:50.178 "bdev_nvme_opal_revert", 00:05:50.178 "bdev_nvme_opal_init", 00:05:50.178 "bdev_nvme_send_cmd", 00:05:50.178 "bdev_nvme_get_path_iostat", 00:05:50.178 "bdev_nvme_get_mdns_discovery_info", 00:05:50.178 "bdev_nvme_stop_mdns_discovery", 00:05:50.178 "bdev_nvme_start_mdns_discovery", 00:05:50.178 "bdev_nvme_set_multipath_policy", 00:05:50.178 "bdev_nvme_set_preferred_path", 00:05:50.178 "bdev_nvme_get_io_paths", 00:05:50.178 "bdev_nvme_remove_error_injection", 00:05:50.178 "bdev_nvme_add_error_injection", 00:05:50.178 "bdev_nvme_get_discovery_info", 00:05:50.178 "bdev_nvme_stop_discovery", 00:05:50.178 "bdev_nvme_start_discovery", 00:05:50.178 "bdev_nvme_get_controller_health_info", 00:05:50.178 "bdev_nvme_disable_controller", 00:05:50.178 "bdev_nvme_enable_controller", 00:05:50.178 "bdev_nvme_reset_controller", 00:05:50.178 "bdev_nvme_get_transport_statistics", 00:05:50.178 "bdev_nvme_apply_firmware", 00:05:50.178 "bdev_nvme_detach_controller", 00:05:50.178 "bdev_nvme_get_controllers", 00:05:50.178 "bdev_nvme_attach_controller", 00:05:50.178 "bdev_nvme_set_hotplug", 00:05:50.178 "bdev_nvme_set_options", 00:05:50.178 "bdev_passthru_delete", 00:05:50.178 "bdev_passthru_create", 00:05:50.178 "bdev_lvol_set_parent_bdev", 00:05:50.178 "bdev_lvol_set_parent", 00:05:50.178 "bdev_lvol_check_shallow_copy", 00:05:50.178 "bdev_lvol_start_shallow_copy", 00:05:50.178 "bdev_lvol_grow_lvstore", 00:05:50.178 "bdev_lvol_get_lvols", 00:05:50.178 "bdev_lvol_get_lvstores", 00:05:50.178 "bdev_lvol_delete", 00:05:50.178 "bdev_lvol_set_read_only", 00:05:50.178 "bdev_lvol_resize", 00:05:50.178 "bdev_lvol_decouple_parent", 00:05:50.178 "bdev_lvol_inflate", 00:05:50.178 "bdev_lvol_rename", 00:05:50.178 "bdev_lvol_clone_bdev", 00:05:50.178 "bdev_lvol_clone", 00:05:50.178 "bdev_lvol_snapshot", 00:05:50.178 "bdev_lvol_create", 00:05:50.178 "bdev_lvol_delete_lvstore", 00:05:50.178 "bdev_lvol_rename_lvstore", 
00:05:50.178 "bdev_lvol_create_lvstore", 00:05:50.178 "bdev_raid_set_options", 00:05:50.178 "bdev_raid_remove_base_bdev", 00:05:50.178 "bdev_raid_add_base_bdev", 00:05:50.178 "bdev_raid_delete", 00:05:50.178 "bdev_raid_create", 00:05:50.178 "bdev_raid_get_bdevs", 00:05:50.178 "bdev_error_inject_error", 00:05:50.178 "bdev_error_delete", 00:05:50.178 "bdev_error_create", 00:05:50.178 "bdev_split_delete", 00:05:50.178 "bdev_split_create", 00:05:50.178 "bdev_delay_delete", 00:05:50.178 "bdev_delay_create", 00:05:50.178 "bdev_delay_update_latency", 00:05:50.178 "bdev_zone_block_delete", 00:05:50.178 "bdev_zone_block_create", 00:05:50.178 "blobfs_create", 00:05:50.178 "blobfs_detect", 00:05:50.178 "blobfs_set_cache_size", 00:05:50.178 "bdev_aio_delete", 00:05:50.178 "bdev_aio_rescan", 00:05:50.178 "bdev_aio_create", 00:05:50.178 "bdev_ftl_set_property", 00:05:50.178 "bdev_ftl_get_properties", 00:05:50.178 "bdev_ftl_get_stats", 00:05:50.178 "bdev_ftl_unmap", 00:05:50.178 "bdev_ftl_unload", 00:05:50.178 "bdev_ftl_delete", 00:05:50.178 "bdev_ftl_load", 00:05:50.178 "bdev_ftl_create", 00:05:50.178 "bdev_virtio_attach_controller", 00:05:50.178 "bdev_virtio_scsi_get_devices", 00:05:50.178 "bdev_virtio_detach_controller", 00:05:50.178 "bdev_virtio_blk_set_hotplug", 00:05:50.178 "bdev_iscsi_delete", 00:05:50.178 "bdev_iscsi_create", 00:05:50.178 "bdev_iscsi_set_options", 00:05:50.178 "accel_error_inject_error", 00:05:50.178 "ioat_scan_accel_module", 00:05:50.178 "dsa_scan_accel_module", 00:05:50.178 "iaa_scan_accel_module", 00:05:50.178 "vfu_virtio_create_scsi_endpoint", 00:05:50.178 "vfu_virtio_scsi_remove_target", 00:05:50.178 "vfu_virtio_scsi_add_target", 00:05:50.178 "vfu_virtio_create_blk_endpoint", 00:05:50.178 "vfu_virtio_delete_endpoint", 00:05:50.178 "keyring_file_remove_key", 00:05:50.178 "keyring_file_add_key", 00:05:50.178 "keyring_linux_set_options", 00:05:50.178 "iscsi_get_histogram", 00:05:50.178 "iscsi_enable_histogram", 00:05:50.178 "iscsi_set_options", 00:05:50.178 "iscsi_get_auth_groups", 00:05:50.178 "iscsi_auth_group_remove_secret", 00:05:50.178 "iscsi_auth_group_add_secret", 00:05:50.178 "iscsi_delete_auth_group", 00:05:50.178 "iscsi_create_auth_group", 00:05:50.178 "iscsi_set_discovery_auth", 00:05:50.178 "iscsi_get_options", 00:05:50.178 "iscsi_target_node_request_logout", 00:05:50.178 "iscsi_target_node_set_redirect", 00:05:50.178 "iscsi_target_node_set_auth", 00:05:50.178 "iscsi_target_node_add_lun", 00:05:50.178 "iscsi_get_stats", 00:05:50.178 "iscsi_get_connections", 00:05:50.178 "iscsi_portal_group_set_auth", 00:05:50.178 "iscsi_start_portal_group", 00:05:50.178 "iscsi_delete_portal_group", 00:05:50.178 "iscsi_create_portal_group", 00:05:50.178 "iscsi_get_portal_groups", 00:05:50.178 "iscsi_delete_target_node", 00:05:50.178 "iscsi_target_node_remove_pg_ig_maps", 00:05:50.178 "iscsi_target_node_add_pg_ig_maps", 00:05:50.178 "iscsi_create_target_node", 00:05:50.178 "iscsi_get_target_nodes", 00:05:50.178 "iscsi_delete_initiator_group", 00:05:50.178 "iscsi_initiator_group_remove_initiators", 00:05:50.178 "iscsi_initiator_group_add_initiators", 00:05:50.178 "iscsi_create_initiator_group", 00:05:50.178 "iscsi_get_initiator_groups", 00:05:50.178 "nvmf_set_crdt", 00:05:50.178 "nvmf_set_config", 00:05:50.178 "nvmf_set_max_subsystems", 00:05:50.178 "nvmf_stop_mdns_prr", 00:05:50.178 "nvmf_publish_mdns_prr", 00:05:50.178 "nvmf_subsystem_get_listeners", 00:05:50.178 "nvmf_subsystem_get_qpairs", 00:05:50.178 "nvmf_subsystem_get_controllers", 00:05:50.178 "nvmf_get_stats", 00:05:50.178 
"nvmf_get_transports", 00:05:50.178 "nvmf_create_transport", 00:05:50.178 "nvmf_get_targets", 00:05:50.178 "nvmf_delete_target", 00:05:50.178 "nvmf_create_target", 00:05:50.178 "nvmf_subsystem_allow_any_host", 00:05:50.178 "nvmf_subsystem_remove_host", 00:05:50.178 "nvmf_subsystem_add_host", 00:05:50.178 "nvmf_ns_remove_host", 00:05:50.178 "nvmf_ns_add_host", 00:05:50.178 "nvmf_subsystem_remove_ns", 00:05:50.178 "nvmf_subsystem_add_ns", 00:05:50.178 "nvmf_subsystem_listener_set_ana_state", 00:05:50.178 "nvmf_discovery_get_referrals", 00:05:50.178 "nvmf_discovery_remove_referral", 00:05:50.178 "nvmf_discovery_add_referral", 00:05:50.178 "nvmf_subsystem_remove_listener", 00:05:50.178 "nvmf_subsystem_add_listener", 00:05:50.178 "nvmf_delete_subsystem", 00:05:50.178 "nvmf_create_subsystem", 00:05:50.178 "nvmf_get_subsystems", 00:05:50.178 "env_dpdk_get_mem_stats", 00:05:50.178 "nbd_get_disks", 00:05:50.178 "nbd_stop_disk", 00:05:50.178 "nbd_start_disk", 00:05:50.178 "ublk_recover_disk", 00:05:50.178 "ublk_get_disks", 00:05:50.178 "ublk_stop_disk", 00:05:50.178 "ublk_start_disk", 00:05:50.178 "ublk_destroy_target", 00:05:50.178 "ublk_create_target", 00:05:50.178 "virtio_blk_create_transport", 00:05:50.178 "virtio_blk_get_transports", 00:05:50.178 "vhost_controller_set_coalescing", 00:05:50.178 "vhost_get_controllers", 00:05:50.178 "vhost_delete_controller", 00:05:50.178 "vhost_create_blk_controller", 00:05:50.178 "vhost_scsi_controller_remove_target", 00:05:50.178 "vhost_scsi_controller_add_target", 00:05:50.178 "vhost_start_scsi_controller", 00:05:50.178 "vhost_create_scsi_controller", 00:05:50.178 "thread_set_cpumask", 00:05:50.178 "framework_get_scheduler", 00:05:50.178 "framework_set_scheduler", 00:05:50.178 "framework_get_reactors", 00:05:50.178 "thread_get_io_channels", 00:05:50.178 "thread_get_pollers", 00:05:50.178 "thread_get_stats", 00:05:50.178 "framework_monitor_context_switch", 00:05:50.178 "spdk_kill_instance", 00:05:50.178 "log_enable_timestamps", 00:05:50.178 "log_get_flags", 00:05:50.178 "log_clear_flag", 00:05:50.178 "log_set_flag", 00:05:50.178 "log_get_level", 00:05:50.178 "log_set_level", 00:05:50.178 "log_get_print_level", 00:05:50.178 "log_set_print_level", 00:05:50.178 "framework_enable_cpumask_locks", 00:05:50.178 "framework_disable_cpumask_locks", 00:05:50.178 "framework_wait_init", 00:05:50.178 "framework_start_init", 00:05:50.178 "scsi_get_devices", 00:05:50.178 "bdev_get_histogram", 00:05:50.178 "bdev_enable_histogram", 00:05:50.178 "bdev_set_qos_limit", 00:05:50.178 "bdev_set_qd_sampling_period", 00:05:50.178 "bdev_get_bdevs", 00:05:50.178 "bdev_reset_iostat", 00:05:50.178 "bdev_get_iostat", 00:05:50.178 "bdev_examine", 00:05:50.178 "bdev_wait_for_examine", 00:05:50.178 "bdev_set_options", 00:05:50.178 "notify_get_notifications", 00:05:50.178 "notify_get_types", 00:05:50.178 "accel_get_stats", 00:05:50.178 "accel_set_options", 00:05:50.178 "accel_set_driver", 00:05:50.179 "accel_crypto_key_destroy", 00:05:50.179 "accel_crypto_keys_get", 00:05:50.179 "accel_crypto_key_create", 00:05:50.179 "accel_assign_opc", 00:05:50.179 "accel_get_module_info", 00:05:50.179 "accel_get_opc_assignments", 00:05:50.179 "vmd_rescan", 00:05:50.179 "vmd_remove_device", 00:05:50.179 "vmd_enable", 00:05:50.179 "sock_get_default_impl", 00:05:50.179 "sock_set_default_impl", 00:05:50.179 "sock_impl_set_options", 00:05:50.179 "sock_impl_get_options", 00:05:50.179 "iobuf_get_stats", 00:05:50.179 "iobuf_set_options", 00:05:50.179 "keyring_get_keys", 00:05:50.179 "framework_get_pci_devices", 
00:05:50.179 "framework_get_config", 00:05:50.179 "framework_get_subsystems", 00:05:50.179 "vfu_tgt_set_base_path", 00:05:50.179 "trace_get_info", 00:05:50.179 "trace_get_tpoint_group_mask", 00:05:50.179 "trace_disable_tpoint_group", 00:05:50.179 "trace_enable_tpoint_group", 00:05:50.179 "trace_clear_tpoint_mask", 00:05:50.179 "trace_set_tpoint_mask", 00:05:50.179 "spdk_get_version", 00:05:50.179 "rpc_get_methods" 00:05:50.179 ] 00:05:50.179 17:41:24 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:50.179 17:41:24 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.179 17:41:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.179 17:41:24 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:50.179 17:41:24 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 818608 00:05:50.179 17:41:24 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 818608 ']' 00:05:50.179 17:41:24 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 818608 00:05:50.179 17:41:24 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:50.179 17:41:24 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:50.179 17:41:24 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 818608 00:05:50.179 17:41:24 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:50.179 17:41:24 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:50.179 17:41:24 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 818608' 00:05:50.179 killing process with pid 818608 00:05:50.179 17:41:24 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 818608 00:05:50.179 17:41:24 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 818608 00:05:50.744 00:05:50.744 real 0m1.194s 00:05:50.744 user 0m2.113s 00:05:50.744 sys 0m0.452s 00:05:50.744 17:41:25 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.744 17:41:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.744 ************************************ 00:05:50.744 END TEST spdkcli_tcp 00:05:50.744 ************************************ 00:05:50.744 17:41:25 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:50.744 17:41:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.744 17:41:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.744 17:41:25 -- common/autotest_common.sh@10 -- # set +x 00:05:50.744 ************************************ 00:05:50.744 START TEST dpdk_mem_utility 00:05:50.744 ************************************ 00:05:50.744 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:50.744 * Looking for test storage... 
00:05:50.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:50.744 17:41:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:50.744 17:41:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=818808 00:05:50.744 17:41:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.744 17:41:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 818808 00:05:50.744 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 818808 ']' 00:05:50.744 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.744 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:50.744 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.744 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:50.744 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:50.744 [2024-07-20 17:41:25.383759] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:50.744 [2024-07-20 17:41:25.383880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818808 ] 00:05:50.744 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.744 [2024-07-20 17:41:25.443164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.744 [2024-07-20 17:41:25.527271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.001 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:51.001 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:51.001 17:41:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:51.001 17:41:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:51.001 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.001 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.001 { 00:05:51.001 "filename": "/tmp/spdk_mem_dump.txt" 00:05:51.001 } 00:05:51.001 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.001 17:41:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:51.260 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:51.260 1 heaps totaling size 814.000000 MiB 00:05:51.260 size: 814.000000 MiB heap id: 0 00:05:51.260 end heaps---------- 00:05:51.260 8 mempools totaling size 598.116089 MiB 00:05:51.260 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:51.260 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:51.260 size: 84.521057 MiB name: bdev_io_818808 00:05:51.260 size: 51.011292 MiB name: evtpool_818808 00:05:51.260 size: 50.003479 MiB name: 
msgpool_818808 00:05:51.260 size: 21.763794 MiB name: PDU_Pool 00:05:51.260 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:51.260 size: 0.026123 MiB name: Session_Pool 00:05:51.260 end mempools------- 00:05:51.260 6 memzones totaling size 4.142822 MiB 00:05:51.260 size: 1.000366 MiB name: RG_ring_0_818808 00:05:51.260 size: 1.000366 MiB name: RG_ring_1_818808 00:05:51.260 size: 1.000366 MiB name: RG_ring_4_818808 00:05:51.260 size: 1.000366 MiB name: RG_ring_5_818808 00:05:51.260 size: 0.125366 MiB name: RG_ring_2_818808 00:05:51.260 size: 0.015991 MiB name: RG_ring_3_818808 00:05:51.260 end memzones------- 00:05:51.260 17:41:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:51.260 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:51.260 list of free elements. size: 12.519348 MiB 00:05:51.260 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:51.260 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:51.260 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:51.260 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:51.260 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:51.260 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:51.260 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:51.260 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:51.260 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:51.260 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:51.260 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:51.260 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:51.260 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:51.260 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:51.260 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:51.260 list of standard malloc elements. 
size: 199.218079 MiB 00:05:51.260 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:51.260 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:51.260 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:51.260 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:51.260 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:51.260 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:51.260 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:51.260 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:51.260 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:51.260 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:51.260 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:51.260 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:51.260 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:51.260 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:51.260 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:51.260 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:51.260 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:51.260 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:51.260 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:51.260 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:51.260 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:51.260 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:51.260 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:51.260 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:51.260 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:51.260 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:51.260 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:51.260 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:51.260 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:51.260 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:51.260 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:51.260 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:51.260 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:51.260 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:51.260 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:51.260 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:51.260 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:51.260 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:51.260 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:51.260 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:51.260 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:51.260 list of memzone associated elements. 
size: 602.262573 MiB 00:05:51.260 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:51.260 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:51.260 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:51.260 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:51.260 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:51.260 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_818808_0 00:05:51.260 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:51.260 associated memzone info: size: 48.002930 MiB name: MP_evtpool_818808_0 00:05:51.260 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:51.260 associated memzone info: size: 48.002930 MiB name: MP_msgpool_818808_0 00:05:51.260 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:51.260 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:51.260 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:51.260 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:51.260 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:51.260 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_818808 00:05:51.260 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:51.260 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_818808 00:05:51.260 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:51.260 associated memzone info: size: 1.007996 MiB name: MP_evtpool_818808 00:05:51.260 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:51.260 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:51.260 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:51.260 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:51.260 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:51.260 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:51.260 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:51.260 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:51.260 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:51.260 associated memzone info: size: 1.000366 MiB name: RG_ring_0_818808 00:05:51.260 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:51.260 associated memzone info: size: 1.000366 MiB name: RG_ring_1_818808 00:05:51.260 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:51.260 associated memzone info: size: 1.000366 MiB name: RG_ring_4_818808 00:05:51.260 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:51.260 associated memzone info: size: 1.000366 MiB name: RG_ring_5_818808 00:05:51.260 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:51.260 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_818808 00:05:51.260 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:51.260 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:51.260 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:51.260 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:51.260 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:51.260 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:51.260 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:51.260 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_818808 00:05:51.260 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:51.260 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:51.260 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:51.260 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:51.260 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:51.260 associated memzone info: size: 0.015991 MiB name: RG_ring_3_818808 00:05:51.260 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:51.260 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:51.260 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:51.260 associated memzone info: size: 0.000183 MiB name: MP_msgpool_818808 00:05:51.260 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:51.260 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_818808 00:05:51.260 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:51.260 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:51.260 17:41:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:51.261 17:41:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 818808 00:05:51.261 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 818808 ']' 00:05:51.261 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 818808 00:05:51.261 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:51.261 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:51.261 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 818808 00:05:51.261 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:51.261 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:51.261 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 818808' 00:05:51.261 killing process with pid 818808 00:05:51.261 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 818808 00:05:51.261 17:41:25 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 818808 00:05:51.826 00:05:51.826 real 0m1.053s 00:05:51.826 user 0m1.021s 00:05:51.826 sys 0m0.406s 00:05:51.826 17:41:26 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.826 17:41:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.826 ************************************ 00:05:51.826 END TEST dpdk_mem_utility 00:05:51.826 ************************************ 00:05:51.826 17:41:26 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:51.826 17:41:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:51.826 17:41:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.826 17:41:26 -- common/autotest_common.sh@10 -- # set +x 00:05:51.826 ************************************ 00:05:51.826 START TEST event 00:05:51.826 ************************************ 00:05:51.826 17:41:26 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:51.826 * Looking for test storage... 
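For reference, the memory report above is produced by two helpers that the dpdk_mem_utility test drives in sequence: the env_dpdk_get_mem_stats RPC, which makes the target write its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which summarizes that dump (the -m 0 form prints the per-heap breakdown of heap 0 shown above). A rough sketch of the same flow, with the SPDK checkout path and RPC socket as assumptions:

    #!/usr/bin/env python3
    # Rough sketch of the dpdk_mem_utility flow: dump the target's DPDK memory
    # state over RPC, then summarize it with dpdk_mem_info.py. SPDK_DIR and the
    # RPC socket path are assumptions, not values from this run.
    import json
    import subprocess

    SPDK_DIR = "/path/to/spdk"       # assumption: root of an SPDK checkout
    RPC_SOCK = "/var/tmp/spdk.sock"  # spdk_tgt's default RPC socket

    def dump_and_summarize():
        # env_dpdk_get_mem_stats answers with the file the target wrote,
        # e.g. {"filename": "/tmp/spdk_mem_dump.txt"} as seen in the log above.
        out = subprocess.run(
            [f"{SPDK_DIR}/scripts/rpc.py", "-s", RPC_SOCK, "env_dpdk_get_mem_stats"],
            check=True, capture_output=True, text=True).stdout
        dump_file = json.loads(out)["filename"]

        # Whole-target summary: heaps, mempools, memzones.
        subprocess.run([f"{SPDK_DIR}/scripts/dpdk_mem_info.py"], check=True)
        # Per-heap breakdown of heap 0, as the test does with "-m 0".
        subprocess.run([f"{SPDK_DIR}/scripts/dpdk_mem_info.py", "-m", "0"], check=True)
        return dump_file

    if __name__ == "__main__":
        print("memory dump written to", dump_and_summarize())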
00:05:51.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:51.826 17:41:26 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:51.826 17:41:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:51.826 17:41:26 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:51.826 17:41:26 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:51.826 17:41:26 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.826 17:41:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.826 ************************************ 00:05:51.826 START TEST event_perf 00:05:51.826 ************************************ 00:05:51.826 17:41:26 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:51.826 Running I/O for 1 seconds...[2024-07-20 17:41:26.474817] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:51.826 [2024-07-20 17:41:26.474877] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid819004 ] 00:05:51.826 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.826 [2024-07-20 17:41:26.537985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.084 [2024-07-20 17:41:26.629964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.084 [2024-07-20 17:41:26.630019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.084 [2024-07-20 17:41:26.630144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.084 [2024-07-20 17:41:26.630147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.018 Running I/O for 1 seconds... 00:05:53.018 lcore 0: 229236 00:05:53.018 lcore 1: 229236 00:05:53.018 lcore 2: 229236 00:05:53.018 lcore 3: 229235 00:05:53.018 done. 00:05:53.018 00:05:53.018 real 0m1.244s 00:05:53.018 user 0m4.149s 00:05:53.018 sys 0m0.088s 00:05:53.018 17:41:27 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.018 17:41:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.018 ************************************ 00:05:53.018 END TEST event_perf 00:05:53.018 ************************************ 00:05:53.018 17:41:27 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:53.018 17:41:27 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:53.018 17:41:27 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.018 17:41:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.018 ************************************ 00:05:53.018 START TEST event_reactor 00:05:53.018 ************************************ 00:05:53.018 17:41:27 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:53.018 [2024-07-20 17:41:27.765380] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
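The event_perf run above prints one "lcore N: <count>" line per core for the 1 second window (-t 1). If a combined figure is wanted, the per-core counts can simply be summed; a small, purely illustrative helper, assuming each count is the number of events that core handled during the window:

    #!/usr/bin/env python3
    # Illustrative helper only: pull the "lcore N: <count>" lines out of
    # event_perf output (as logged above) and total them, assuming each count is
    # the number of events that core processed during the -t <seconds> run.
    import re
    import sys

    LCORE_RE = re.compile(r"lcore\s+(\d+):\s+(\d+)")

    def aggregate_rate(log_text: str, duration_s: float = 1.0) -> float:
        counts = [int(m.group(2)) for m in LCORE_RE.finditer(log_text)]
        return sum(counts) / duration_s if counts else 0.0

    if __name__ == "__main__":
        print(f"aggregate: {aggregate_rate(sys.stdin.read()):.0f} events/sec")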
00:05:53.018 [2024-07-20 17:41:27.765444] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid819164 ] 00:05:53.018 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.276 [2024-07-20 17:41:27.827476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.276 [2024-07-20 17:41:27.918962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.206 test_start 00:05:54.206 oneshot 00:05:54.206 tick 100 00:05:54.206 tick 100 00:05:54.206 tick 250 00:05:54.206 tick 100 00:05:54.206 tick 100 00:05:54.206 tick 100 00:05:54.206 tick 250 00:05:54.206 tick 500 00:05:54.206 tick 100 00:05:54.206 tick 100 00:05:54.206 tick 250 00:05:54.206 tick 100 00:05:54.206 tick 100 00:05:54.206 test_end 00:05:54.206 00:05:54.206 real 0m1.244s 00:05:54.206 user 0m1.156s 00:05:54.206 sys 0m0.084s 00:05:54.206 17:41:28 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.206 17:41:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:54.206 ************************************ 00:05:54.206 END TEST event_reactor 00:05:54.206 ************************************ 00:05:54.463 17:41:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:54.463 17:41:29 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:54.463 17:41:29 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.463 17:41:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.463 ************************************ 00:05:54.463 START TEST event_reactor_perf 00:05:54.463 ************************************ 00:05:54.463 17:41:29 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:54.463 [2024-07-20 17:41:29.054221] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:54.463 [2024-07-20 17:41:29.054282] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid819320 ] 00:05:54.463 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.463 [2024-07-20 17:41:29.118351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.463 [2024-07-20 17:41:29.209691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.834 test_start 00:05:55.834 test_end 00:05:55.834 Performance: 348500 events per second 00:05:55.834 00:05:55.834 real 0m1.250s 00:05:55.834 user 0m1.159s 00:05:55.834 sys 0m0.087s 00:05:55.834 17:41:30 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.834 17:41:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.834 ************************************ 00:05:55.834 END TEST event_reactor_perf 00:05:55.834 ************************************ 00:05:55.834 17:41:30 event -- event/event.sh@49 -- # uname -s 00:05:55.834 17:41:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:55.834 17:41:30 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:55.834 17:41:30 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.834 17:41:30 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.834 17:41:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.834 ************************************ 00:05:55.834 START TEST event_scheduler 00:05:55.834 ************************************ 00:05:55.834 17:41:30 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:55.834 * Looking for test storage... 00:05:55.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:55.834 17:41:30 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:55.834 17:41:30 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=819501 00:05:55.834 17:41:30 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:55.834 17:41:30 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.834 17:41:30 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 819501 00:05:55.834 17:41:30 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 819501 ']' 00:05:55.834 17:41:30 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.834 17:41:30 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:55.834 17:41:30 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:55.834 17:41:30 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:55.834 17:41:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.834 [2024-07-20 17:41:30.437261] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:55.834 [2024-07-20 17:41:30.437344] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid819501 ] 00:05:55.834 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.834 [2024-07-20 17:41:30.496159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.834 [2024-07-20 17:41:30.583888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.834 [2024-07-20 17:41:30.583947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.834 [2024-07-20 17:41:30.584013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.834 [2024-07-20 17:41:30.584016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.092 17:41:30 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:56.092 17:41:30 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:05:56.092 17:41:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:56.092 17:41:30 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.092 17:41:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.092 POWER: Env isn't set yet! 00:05:56.092 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:56.092 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:05:56.092 POWER: Cannot get available frequencies of lcore 0 00:05:56.108 POWER: Attempting to initialise PSTAT power management... 
00:05:56.108 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:56.108 POWER: Initialized successfully for lcore 0 power management 00:05:56.108 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:56.108 POWER: Initialized successfully for lcore 1 power management 00:05:56.108 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:56.108 POWER: Initialized successfully for lcore 2 power management 00:05:56.108 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:56.108 POWER: Initialized successfully for lcore 3 power management 00:05:56.108 [2024-07-20 17:41:30.696976] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:56.108 [2024-07-20 17:41:30.696993] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:56.108 [2024-07-20 17:41:30.697004] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:56.108 17:41:30 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.108 17:41:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:56.108 17:41:30 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.108 17:41:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.108 [2024-07-20 17:41:30.796603] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:56.108 17:41:30 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.108 17:41:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:56.108 17:41:30 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.108 17:41:30 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.108 17:41:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.108 ************************************ 00:05:56.108 START TEST scheduler_create_thread 00:05:56.108 ************************************ 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.108 2 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.108 3 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.108 4 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.108 5 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.108 6 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.108 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.366 7 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.367 8 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.367 9 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.367 10 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.367 17:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.623 17:41:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.623 17:41:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:56.623 17:41:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.623 17:41:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.530 17:41:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.530 17:41:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:58.530 17:41:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:58.530 17:41:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.530 17:41:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.461 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.461 00:05:59.461 real 0m3.100s 00:05:59.461 user 0m0.012s 00:05:59.461 sys 0m0.002s 00:05:59.461 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.461 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.461 ************************************ 00:05:59.461 END TEST scheduler_create_thread 00:05:59.461 ************************************ 00:05:59.461 17:41:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:59.461 17:41:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 819501 00:05:59.461 17:41:33 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 819501 ']' 00:05:59.461 17:41:33 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 819501 00:05:59.461 17:41:33 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
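The scheduler_create_thread test above first selects the dynamic scheduler (rpc_cmd framework_set_scheduler dynamic, issued before framework_start_init since the app was launched with --wait-for-rpc) and then creates, re-weights, and deletes pinned threads through the scheduler_plugin RPCs. A sketch of that sequence driven through scripts/rpc.py; the SPDK path and socket are assumptions, and the scheduler_plugin module has to be importable by rpc.py just as the test arranges:

    #!/usr/bin/env python3
    # Sketch of the RPC sequence exercised above: pick the dynamic scheduler,
    # then create, re-weight and delete a pinned thread via the scheduler_plugin
    # test plugin. SPDK_DIR and RPC_SOCK are assumptions, not values from this run.
    import subprocess

    SPDK_DIR = "/path/to/spdk"       # assumption: root of an SPDK checkout
    RPC_SOCK = "/var/tmp/spdk.sock"  # spdk_tgt's default RPC socket

    def rpc(*args, plugin=None):
        cmd = [f"{SPDK_DIR}/scripts/rpc.py", "-s", RPC_SOCK]
        if plugin:
            cmd += ["--plugin", plugin]
        return subprocess.run(cmd + list(args), check=True,
                              capture_output=True, text=True).stdout.strip()

    if __name__ == "__main__":
        # The target was started with --wait-for-rpc, so the scheduler is chosen
        # before framework_start_init, exactly as in the log.
        rpc("framework_set_scheduler", "dynamic")
        rpc("framework_start_init")

        # "scheduler_thread_create -n active_pinned -m 0x1 -a 100": a thread
        # pinned to core 0 asking for 100% activity; the plugin prints its id.
        tid = rpc("scheduler_thread_create", "-n", "active_pinned",
                  "-m", "0x1", "-a", "100", plugin="scheduler_plugin")
        # Lower its requested activity to 50%, then remove it again.
        rpc("scheduler_thread_set_active", tid, "50", plugin="scheduler_plugin")
        rpc("scheduler_thread_delete", tid, plugin="scheduler_plugin")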
00:05:59.461 17:41:33 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:59.461 17:41:33 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 819501 00:05:59.461 17:41:33 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:59.461 17:41:33 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:59.461 17:41:33 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 819501' 00:05:59.461 killing process with pid 819501 00:05:59.461 17:41:33 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 819501 00:05:59.461 17:41:33 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 819501 00:05:59.718 [2024-07-20 17:41:34.304598] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:59.718 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:05:59.718 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:59.718 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:05:59.718 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:59.718 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:05:59.718 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:59.719 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:05:59.719 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:59.976 00:05:59.976 real 0m4.210s 00:05:59.976 user 0m6.931s 00:05:59.976 sys 0m0.316s 00:05:59.976 17:41:34 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.976 17:41:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.976 ************************************ 00:05:59.976 END TEST event_scheduler 00:05:59.976 ************************************ 00:05:59.976 17:41:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:59.976 17:41:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:59.976 17:41:34 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:59.976 17:41:34 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.976 17:41:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.976 ************************************ 00:05:59.976 START TEST app_repeat 00:05:59.976 ************************************ 00:05:59.976 17:41:34 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:05:59.976 17:41:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.976 17:41:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.976 17:41:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:59.976 17:41:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.976 17:41:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:59.976 17:41:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:59.976 17:41:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:59.976 17:41:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=820080 00:05:59.976 17:41:34 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:59.976 17:41:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.976 17:41:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 820080' 00:05:59.976 Process app_repeat pid: 820080 00:05:59.976 17:41:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.976 17:41:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:59.976 spdk_app_start Round 0 00:05:59.976 17:41:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 820080 /var/tmp/spdk-nbd.sock 00:05:59.976 17:41:34 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 820080 ']' 00:05:59.976 17:41:34 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.976 17:41:34 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:59.976 17:41:34 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.976 17:41:34 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:59.976 17:41:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.976 [2024-07-20 17:41:34.636077] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:59.976 [2024-07-20 17:41:34.636150] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid820080 ] 00:05:59.976 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.976 [2024-07-20 17:41:34.696925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.234 [2024-07-20 17:41:34.789086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.234 [2024-07-20 17:41:34.789093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.234 17:41:34 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:00.234 17:41:34 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:00.234 17:41:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.491 Malloc0 00:06:00.491 17:41:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.749 Malloc1 00:06:00.749 17:41:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.749 17:41:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.749 17:41:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.749 17:41:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.749 17:41:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.749 17:41:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.749 17:41:35 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.749 17:41:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.749 17:41:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.749 17:41:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.749 17:41:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.749 17:41:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.749 17:41:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.749 17:41:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.749 17:41:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.749 17:41:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.039 /dev/nbd0 00:06:01.039 17:41:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.039 17:41:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.039 17:41:35 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:01.039 17:41:35 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:01.039 17:41:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:01.039 17:41:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:01.039 17:41:35 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:01.039 17:41:35 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:01.039 17:41:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:01.039 17:41:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:01.039 17:41:35 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.039 1+0 records in 00:06:01.039 1+0 records out 00:06:01.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181492 s, 22.6 MB/s 00:06:01.039 17:41:35 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.039 17:41:35 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:01.039 17:41:35 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.039 17:41:35 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:01.039 17:41:35 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:01.039 17:41:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.039 17:41:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.039 17:41:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.296 /dev/nbd1 00:06:01.296 17:41:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.296 17:41:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.296 17:41:35 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:01.296 17:41:35 event.app_repeat -- 
common/autotest_common.sh@865 -- # local i 00:06:01.296 17:41:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:01.296 17:41:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:01.296 17:41:35 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:01.296 17:41:35 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:01.296 17:41:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:01.296 17:41:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:01.296 17:41:35 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.296 1+0 records in 00:06:01.296 1+0 records out 00:06:01.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215667 s, 19.0 MB/s 00:06:01.296 17:41:35 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.296 17:41:35 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:01.296 17:41:35 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.296 17:41:35 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:01.296 17:41:35 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:01.296 17:41:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.296 17:41:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.296 17:41:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.296 17:41:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.296 17:41:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.553 { 00:06:01.553 "nbd_device": "/dev/nbd0", 00:06:01.553 "bdev_name": "Malloc0" 00:06:01.553 }, 00:06:01.553 { 00:06:01.553 "nbd_device": "/dev/nbd1", 00:06:01.553 "bdev_name": "Malloc1" 00:06:01.553 } 00:06:01.553 ]' 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.553 { 00:06:01.553 "nbd_device": "/dev/nbd0", 00:06:01.553 "bdev_name": "Malloc0" 00:06:01.553 }, 00:06:01.553 { 00:06:01.553 "nbd_device": "/dev/nbd1", 00:06:01.553 "bdev_name": "Malloc1" 00:06:01.553 } 00:06:01.553 ]' 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.553 /dev/nbd1' 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.553 /dev/nbd1' 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.553 17:41:36 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.553 256+0 records in 00:06:01.553 256+0 records out 00:06:01.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502672 s, 209 MB/s 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.553 256+0 records in 00:06:01.553 256+0 records out 00:06:01.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237411 s, 44.2 MB/s 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.553 256+0 records in 00:06:01.553 256+0 records out 00:06:01.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217831 s, 48.1 MB/s 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.553 17:41:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.809 17:41:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.810 17:41:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.810 17:41:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.810 17:41:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.810 17:41:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.810 17:41:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.810 17:41:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.810 17:41:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.810 17:41:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.810 17:41:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.067 17:41:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.067 17:41:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.067 17:41:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.067 17:41:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.067 17:41:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.067 17:41:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.067 17:41:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.067 17:41:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.067 17:41:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.067 17:41:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.067 17:41:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.324 17:41:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.324 17:41:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.324 17:41:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.324 17:41:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.324 17:41:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.324 17:41:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.324 17:41:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.324 17:41:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.324 17:41:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.324 17:41:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.324 17:41:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.324 17:41:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.324 17:41:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.888 17:41:37 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:06:02.888 [2024-07-20 17:41:37.607965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.145 [2024-07-20 17:41:37.698469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.145 [2024-07-20 17:41:37.698469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.145 [2024-07-20 17:41:37.754678] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.145 [2024-07-20 17:41:37.754773] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.671 17:41:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:05.671 17:41:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:05.671 spdk_app_start Round 1 00:06:05.671 17:41:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 820080 /var/tmp/spdk-nbd.sock 00:06:05.671 17:41:40 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 820080 ']' 00:06:05.671 17:41:40 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.671 17:41:40 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.671 17:41:40 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.671 17:41:40 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.671 17:41:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.929 17:41:40 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.929 17:41:40 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:05.929 17:41:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.188 Malloc0 00:06:06.188 17:41:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.447 Malloc1 00:06:06.447 17:41:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.447 17:41:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.447 17:41:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.447 17:41:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.447 17:41:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.447 17:41:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.447 17:41:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.447 17:41:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.447 17:41:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.447 17:41:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.447 17:41:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.447 17:41:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:06:06.447 17:41:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:06.447 17:41:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.447 17:41:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.447 17:41:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.705 /dev/nbd0 00:06:06.705 17:41:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.705 17:41:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.705 17:41:41 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:06.705 17:41:41 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:06.705 17:41:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:06.705 17:41:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:06.705 17:41:41 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:06.705 17:41:41 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:06.705 17:41:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:06.705 17:41:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:06.705 17:41:41 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.705 1+0 records in 00:06:06.705 1+0 records out 00:06:06.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189372 s, 21.6 MB/s 00:06:06.705 17:41:41 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.705 17:41:41 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:06.705 17:41:41 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.705 17:41:41 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:06.705 17:41:41 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:06.705 17:41:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.705 17:41:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.705 17:41:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.963 /dev/nbd1 00:06:06.963 17:41:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.963 17:41:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.963 17:41:41 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:06.963 17:41:41 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:06.963 17:41:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:06.963 17:41:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:06.963 17:41:41 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:06.963 17:41:41 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:06.963 17:41:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:06.963 17:41:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 
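The trace above shows the waitfornbd helper gating each nbd_start_disk call: it polls /proc/partitions until the new device name appears, then issues a single 4 KiB direct-I/O read and checks that something was actually copied before declaring the export ready. A minimal standalone sketch of that check follows; the function name, the /tmp path, the sleep between retries, and the retry limit are illustrative, not lifted from the suite.

    # Wait for an nbd device to appear and prove it answers a direct read.
    # Mirrors the waitfornbd trace above; names and limits here are illustrative.
    waitfornbd_sketch() {
        local nbd_name=$1 tmp=/tmp/nbdtest i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        (( i <= 20 )) || return 1                 # never showed up in /proc/partitions
        # One 4 KiB O_DIRECT read; a non-empty output file means the device is live.
        dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]
    }
    # Example: waitfornbd_sketch nbd0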
00:06:06.963 17:41:41 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.963 1+0 records in 00:06:06.963 1+0 records out 00:06:06.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243631 s, 16.8 MB/s 00:06:06.963 17:41:41 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.963 17:41:41 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:06.963 17:41:41 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.963 17:41:41 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:06.963 17:41:41 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:06.963 17:41:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.963 17:41:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.964 17:41:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.964 17:41:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.964 17:41:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.222 { 00:06:07.222 "nbd_device": "/dev/nbd0", 00:06:07.222 "bdev_name": "Malloc0" 00:06:07.222 }, 00:06:07.222 { 00:06:07.222 "nbd_device": "/dev/nbd1", 00:06:07.222 "bdev_name": "Malloc1" 00:06:07.222 } 00:06:07.222 ]' 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.222 { 00:06:07.222 "nbd_device": "/dev/nbd0", 00:06:07.222 "bdev_name": "Malloc0" 00:06:07.222 }, 00:06:07.222 { 00:06:07.222 "nbd_device": "/dev/nbd1", 00:06:07.222 "bdev_name": "Malloc1" 00:06:07.222 } 00:06:07.222 ]' 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.222 /dev/nbd1' 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.222 /dev/nbd1' 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.222 256+0 records in 00:06:07.222 256+0 records out 00:06:07.222 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00379709 s, 276 MB/s 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.222 17:41:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.480 256+0 records in 00:06:07.480 256+0 records out 00:06:07.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214403 s, 48.9 MB/s 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.480 256+0 records in 00:06:07.480 256+0 records out 00:06:07.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255605 s, 41.0 MB/s 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.480 17:41:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.739 17:41:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.739 17:41:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.739 17:41:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.739 
17:41:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.739 17:41:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.739 17:41:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.739 17:41:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.739 17:41:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.739 17:41:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.739 17:41:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.997 17:41:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.997 17:41:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.997 17:41:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.997 17:41:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.997 17:41:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.997 17:41:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.997 17:41:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.997 17:41:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.997 17:41:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.997 17:41:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.997 17:41:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.256 17:41:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.256 17:41:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.256 17:41:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.256 17:41:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.256 17:41:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.256 17:41:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.256 17:41:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.256 17:41:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.256 17:41:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.256 17:41:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.256 17:41:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.256 17:41:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.256 17:41:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.514 17:41:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:08.771 [2024-07-20 17:41:43.327787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.771 [2024-07-20 17:41:43.417666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.771 [2024-07-20 17:41:43.417680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.771 [2024-07-20 17:41:43.480338] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
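Once both exports are torn down, nbd_get_count (traced just above) confirms the list is empty again: it queries nbd_get_disks over the app's RPC socket, extracts the nbd_device fields with jq, and counts /dev/nbd matches with grep -c, which is why the trace shows a true fallback when grep finds nothing. A rough equivalent, with the rpc.py and socket paths taken from the log and the rest a sketch:

    # Count exported nbd devices via the app's RPC socket; sketch only.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    disks=$($rpc -s $sock nbd_get_disks | jq -r '.[] | .nbd_device')
    count=$(echo "$disks" | grep -c /dev/nbd || true)   # grep exits non-zero on zero matches
    [ "$count" -eq 0 ] && echo 'all nbd exports are gone'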
00:06:08.771 [2024-07-20 17:41:43.480420] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.066 17:41:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:12.066 17:41:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:12.066 spdk_app_start Round 2 00:06:12.066 17:41:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 820080 /var/tmp/spdk-nbd.sock 00:06:12.066 17:41:46 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 820080 ']' 00:06:12.066 17:41:46 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.066 17:41:46 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:12.066 17:41:46 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.066 17:41:46 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:12.066 17:41:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.066 17:41:46 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:12.066 17:41:46 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:12.066 17:41:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.066 Malloc0 00:06:12.066 17:41:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.324 Malloc1 00:06:12.324 17:41:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.324 17:41:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.324 17:41:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.324 17:41:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.324 17:41:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.324 17:41:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.324 17:41:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.324 17:41:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.324 17:41:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.324 17:41:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.325 17:41:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.325 17:41:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.325 17:41:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.325 17:41:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.325 17:41:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.325 17:41:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.582 /dev/nbd0 00:06:12.582 17:41:47 
event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.582 17:41:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.582 17:41:47 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:12.582 17:41:47 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:12.582 17:41:47 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:12.582 17:41:47 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:12.582 17:41:47 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:12.582 17:41:47 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:12.582 17:41:47 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:12.582 17:41:47 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:12.582 17:41:47 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.582 1+0 records in 00:06:12.582 1+0 records out 00:06:12.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000141691 s, 28.9 MB/s 00:06:12.583 17:41:47 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.583 17:41:47 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:12.583 17:41:47 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.583 17:41:47 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:12.583 17:41:47 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:12.583 17:41:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.583 17:41:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.583 17:41:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.842 /dev/nbd1 00:06:12.842 17:41:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.842 17:41:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.842 17:41:47 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:12.842 17:41:47 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:12.842 17:41:47 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:12.842 17:41:47 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:12.842 17:41:47 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:12.842 17:41:47 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:12.842 17:41:47 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:12.842 17:41:47 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:12.842 17:41:47 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.842 1+0 records in 00:06:12.842 1+0 records out 00:06:12.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175111 s, 23.4 MB/s 00:06:12.842 17:41:47 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.842 17:41:47 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:12.842 17:41:47 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.842 17:41:47 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:12.842 17:41:47 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:12.842 17:41:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.842 17:41:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.842 17:41:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.842 17:41:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.842 17:41:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.099 17:41:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.099 { 00:06:13.099 "nbd_device": "/dev/nbd0", 00:06:13.099 "bdev_name": "Malloc0" 00:06:13.099 }, 00:06:13.099 { 00:06:13.099 "nbd_device": "/dev/nbd1", 00:06:13.099 "bdev_name": "Malloc1" 00:06:13.099 } 00:06:13.099 ]' 00:06:13.099 17:41:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.099 { 00:06:13.099 "nbd_device": "/dev/nbd0", 00:06:13.099 "bdev_name": "Malloc0" 00:06:13.099 }, 00:06:13.099 { 00:06:13.099 "nbd_device": "/dev/nbd1", 00:06:13.099 "bdev_name": "Malloc1" 00:06:13.099 } 00:06:13.099 ]' 00:06:13.099 17:41:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.100 /dev/nbd1' 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.100 /dev/nbd1' 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.100 256+0 records in 00:06:13.100 256+0 records out 00:06:13.100 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.004723 s, 222 MB/s 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.100 256+0 records in 00:06:13.100 256+0 records out 00:06:13.100 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213844 s, 49.0 MB/s 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.100 256+0 records in 00:06:13.100 256+0 records out 00:06:13.100 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239297 s, 43.8 MB/s 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.100 17:41:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.356 17:41:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.356 17:41:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.356 17:41:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.356 17:41:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.356 17:41:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.356 17:41:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.356 17:41:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.356 17:41:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
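The nbd_dd_data_verify calls repeated in every round follow the two-phase pattern traced here: the write phase seeds a 1 MiB temporary file from /dev/urandom and dd's it onto each nbd device with oflag=direct, and the verify phase re-runs cmp -b -n 1M between that file and each device before deleting it. A condensed sketch, with the device list and temporary file name as placeholders:

    # Write random data to each nbd export, then byte-compare it back; sketch.
    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256           # 1 MiB of test data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write phase
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                             # verify phase, non-zero exit on mismatch
    done
    rm "$tmp_file"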
00:06:13.356 17:41:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.356 17:41:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.612 17:41:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.612 17:41:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.612 17:41:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.612 17:41:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.612 17:41:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.612 17:41:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.612 17:41:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.612 17:41:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.612 17:41:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.612 17:41:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.612 17:41:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.895 17:41:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.895 17:41:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.895 17:41:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.895 17:41:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.895 17:41:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.895 17:41:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.895 17:41:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.895 17:41:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.895 17:41:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.895 17:41:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.895 17:41:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.895 17:41:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.895 17:41:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.152 17:41:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:14.409 [2024-07-20 17:41:49.072484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.409 [2024-07-20 17:41:49.161737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.409 [2024-07-20 17:41:49.161738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.665 [2024-07-20 17:41:49.224432] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.665 [2024-07-20 17:41:49.224514] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
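Each round ends the way the log shows here: event.sh asks the running app to exit with spdk_kill_instance SIGTERM, sleeps three seconds while the reactors shut down, and the next loop iteration brings the app back up and repeats the malloc/nbd verification. A compressed, paraphrased view of that outer loop; the rpc.py and socket paths come from the log, while the restart and waitforlisten steps between rounds are elided:

    # Outer app_repeat loop, paraphrased from the event/event.sh trace above.
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for round in 0 1 2; do
        echo "spdk_app_start Round $round"
        $rpc bdev_malloc_create 64 4096        # Malloc0
        $rpc bdev_malloc_create 64 4096        # Malloc1
        # ... start nbd exports, write and verify data, stop exports ...
        $rpc spdk_kill_instance SIGTERM        # ask the app to exit before the next round
        sleep 3                                # give the reactors time to shut down
    done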
00:06:17.256 17:41:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 820080 /var/tmp/spdk-nbd.sock 00:06:17.256 17:41:51 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 820080 ']' 00:06:17.256 17:41:51 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.256 17:41:51 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:17.256 17:41:51 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.256 17:41:51 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:17.256 17:41:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.515 17:41:52 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:17.515 17:41:52 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:17.515 17:41:52 event.app_repeat -- event/event.sh@39 -- # killprocess 820080 00:06:17.515 17:41:52 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 820080 ']' 00:06:17.515 17:41:52 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 820080 00:06:17.515 17:41:52 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:17.515 17:41:52 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:17.515 17:41:52 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 820080 00:06:17.515 17:41:52 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:17.515 17:41:52 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:17.515 17:41:52 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 820080' 00:06:17.515 killing process with pid 820080 00:06:17.515 17:41:52 event.app_repeat -- common/autotest_common.sh@965 -- # kill 820080 00:06:17.515 17:41:52 event.app_repeat -- common/autotest_common.sh@970 -- # wait 820080 00:06:17.774 spdk_app_start is called in Round 0. 00:06:17.774 Shutdown signal received, stop current app iteration 00:06:17.774 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:17.774 spdk_app_start is called in Round 1. 00:06:17.774 Shutdown signal received, stop current app iteration 00:06:17.774 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:17.774 spdk_app_start is called in Round 2. 00:06:17.774 Shutdown signal received, stop current app iteration 00:06:17.774 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:17.774 spdk_app_start is called in Round 3. 
00:06:17.774 Shutdown signal received, stop current app iteration 00:06:17.774 17:41:52 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:17.774 17:41:52 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:17.774 00:06:17.774 real 0m17.712s 00:06:17.774 user 0m38.930s 00:06:17.774 sys 0m3.336s 00:06:17.774 17:41:52 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.774 17:41:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.774 ************************************ 00:06:17.774 END TEST app_repeat 00:06:17.774 ************************************ 00:06:17.774 17:41:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:17.774 17:41:52 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.774 17:41:52 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:17.774 17:41:52 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.774 17:41:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.774 ************************************ 00:06:17.774 START TEST cpu_locks 00:06:17.774 ************************************ 00:06:17.774 17:41:52 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.774 * Looking for test storage... 00:06:17.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:17.774 17:41:52 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:17.774 17:41:52 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:17.774 17:41:52 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:17.774 17:41:52 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:17.774 17:41:52 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:17.774 17:41:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.774 17:41:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.774 ************************************ 00:06:17.774 START TEST default_locks 00:06:17.774 ************************************ 00:06:17.774 17:41:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:17.774 17:41:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=822429 00:06:17.774 17:41:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.774 17:41:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 822429 00:06:17.774 17:41:52 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 822429 ']' 00:06:17.774 17:41:52 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.774 17:41:52 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:17.774 17:41:52 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
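killprocess, used above to take down the repeat app and again throughout the cpu_locks tests that follow, does not signal blindly: on Linux it first reads the target's comm name with ps --no-headers -o comm= and only proceeds when the pid is the SPDK process itself rather than a sudo wrapper, then kills it and waits for it to exit. A trimmed sketch of that guard; only the non-sudo branch is exercised in this run, so the sudo case is left out:

    # Kill an SPDK test process by pid after checking what it actually is; sketch.
    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1
        if [ "$(uname)" = Linux ]; then
            local pname
            pname=$(ps --no-headers -o comm= "$pid")
            # The log only shows pname=reactor_0 here; the sudo branch is elided.
            [ "$pname" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true         # reap it if it is our child
    }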
00:06:17.774 17:41:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:17.774 17:41:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.774 [2024-07-20 17:41:52.498558] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:17.774 [2024-07-20 17:41:52.498639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid822429 ] 00:06:17.774 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.774 [2024-07-20 17:41:52.556005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.033 [2024-07-20 17:41:52.640967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.291 17:41:52 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.291 17:41:52 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:18.291 17:41:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 822429 00:06:18.291 17:41:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 822429 00:06:18.291 17:41:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.548 lslocks: write error 00:06:18.548 17:41:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 822429 00:06:18.548 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 822429 ']' 00:06:18.548 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 822429 00:06:18.548 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:18.549 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:18.549 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 822429 00:06:18.549 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:18.549 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:18.549 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 822429' 00:06:18.549 killing process with pid 822429 00:06:18.549 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 822429 00:06:18.549 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 822429 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 822429 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 822429 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # 
waitforlisten 822429 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 822429 ']' 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (822429) - No such process 00:06:19.112 ERROR: process (pid: 822429) is no longer running 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.112 00:06:19.112 real 0m1.299s 00:06:19.112 user 0m1.247s 00:06:19.112 sys 0m0.545s 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.112 17:41:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.112 ************************************ 00:06:19.112 END TEST default_locks 00:06:19.112 ************************************ 00:06:19.112 17:41:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:19.112 17:41:53 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:19.112 17:41:53 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.112 17:41:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.112 ************************************ 00:06:19.112 START TEST default_locks_via_rpc 00:06:19.112 ************************************ 00:06:19.112 17:41:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:19.112 17:41:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=822599 00:06:19.112 17:41:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.112 17:41:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 822599 00:06:19.112 17:41:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 822599 ']' 00:06:19.112 17:41:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.112 17:41:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.112 17:41:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.112 17:41:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.112 17:41:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.112 [2024-07-20 17:41:53.852548] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:19.112 [2024-07-20 17:41:53.852635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid822599 ] 00:06:19.112 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.370 [2024-07-20 17:41:53.914256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.370 [2024-07-20 17:41:54.002176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.627 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.627 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:19.627 17:41:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:19.627 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.627 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.627 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.627 17:41:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:19.627 17:41:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.627 17:41:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.627 17:41:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.627 17:41:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.627 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.627 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.627 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.627 17:41:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 822599 00:06:19.627 17:41:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 822599 00:06:19.627 17:41:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.884 17:41:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 822599 00:06:19.884 17:41:54 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 822599 ']' 00:06:19.884 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 822599 00:06:19.884 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:19.884 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:19.884 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 822599 00:06:19.884 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:19.884 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:19.884 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 822599' 00:06:19.884 killing process with pid 822599 00:06:19.884 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 822599 00:06:19.884 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 822599 00:06:20.450 00:06:20.450 real 0m1.185s 00:06:20.450 user 0m1.114s 00:06:20.450 sys 0m0.519s 00:06:20.450 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.450 17:41:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.450 ************************************ 00:06:20.450 END TEST default_locks_via_rpc 00:06:20.450 ************************************ 00:06:20.450 17:41:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:20.450 17:41:55 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:20.450 17:41:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.450 17:41:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.450 ************************************ 00:06:20.450 START TEST non_locking_app_on_locked_coremask 00:06:20.450 ************************************ 00:06:20.450 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:20.450 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=822759 00:06:20.450 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.450 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 822759 /var/tmp/spdk.sock 00:06:20.450 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 822759 ']' 00:06:20.450 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.450 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.450 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
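Both default_locks runs above lean on the same locks_exist check: lslocks -p <pid> is piped through grep -q spdk_cpu_lock, so the test passes only while the target still holds a file lock whose path contains spdk_cpu_lock, and the "lslocks: write error" lines are just lslocks complaining about the broken pipe once grep exits early. Roughly:

    # Does the pid still hold an SPDK CPU-core lock file? Sketch of locks_exist.
    locks_exist_sketch() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    # Example: locks_exist_sketch 822599 && echo 'core lock held'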
00:06:20.450 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.450 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.450 [2024-07-20 17:41:55.083212] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:20.450 [2024-07-20 17:41:55.083312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid822759 ] 00:06:20.450 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.450 [2024-07-20 17:41:55.141005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.450 [2024-07-20 17:41:55.230023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.707 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.707 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:20.707 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=822777 00:06:20.707 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:20.707 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 822777 /var/tmp/spdk2.sock 00:06:20.707 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 822777 ']' 00:06:20.707 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.707 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.707 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.707 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.707 17:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.964 [2024-07-20 17:41:55.534061] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:20.964 [2024-07-20 17:41:55.534165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid822777 ] 00:06:20.964 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.964 [2024-07-20 17:41:55.625861] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
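The second spdk_tgt above is started on the same core mask (-m 0x1) but with --disable-cpumask-locks and its own RPC socket, which is the only reason two targets can share core 0 in this test. A minimal sketch of the same pattern outside the test harness, assuming an SPDK build directory and the util-linux lslocks tool; the sleep calls are a crude stand-in for the suite's waitforlisten helper:

    # first target claims core 0; the lock file /var/tmp/spdk_cpu_lock_000 shows up in lslocks
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &
    pid1=$!
    sleep 2

    # the per-core lock is visible in lslocks output for that pid
    lslocks -p "$pid1" | grep spdk_cpu_lock

    # a second target on the same mask starts only because core locks are disabled
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    sleep 2

    # the second target holds no spdk_cpu_lock entry of its own
    lslocks -p "$pid2" | grep spdk_cpu_lock || echo "no core lock held by $pid2"

    kill "$pid1" "$pid2"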
00:06:20.964 [2024-07-20 17:41:55.625894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.222 [2024-07-20 17:41:55.809393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.789 17:41:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:21.789 17:41:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:21.789 17:41:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 822759 00:06:21.789 17:41:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 822759 00:06:21.789 17:41:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.046 lslocks: write error 00:06:22.046 17:41:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 822759 00:06:22.046 17:41:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 822759 ']' 00:06:22.046 17:41:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 822759 00:06:22.046 17:41:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:22.046 17:41:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:22.046 17:41:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 822759 00:06:22.304 17:41:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:22.304 17:41:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:22.304 17:41:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 822759' 00:06:22.304 killing process with pid 822759 00:06:22.304 17:41:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 822759 00:06:22.304 17:41:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 822759 00:06:23.236 17:41:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 822777 00:06:23.236 17:41:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 822777 ']' 00:06:23.236 17:41:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 822777 00:06:23.236 17:41:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:23.236 17:41:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:23.236 17:41:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 822777 00:06:23.236 17:41:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:23.236 17:41:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:23.236 17:41:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 822777' 00:06:23.236 killing 
process with pid 822777 00:06:23.236 17:41:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 822777 00:06:23.236 17:41:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 822777 00:06:23.494 00:06:23.494 real 0m3.077s 00:06:23.494 user 0m3.197s 00:06:23.494 sys 0m1.057s 00:06:23.494 17:41:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.494 17:41:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.494 ************************************ 00:06:23.494 END TEST non_locking_app_on_locked_coremask 00:06:23.494 ************************************ 00:06:23.494 17:41:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:23.494 17:41:58 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:23.494 17:41:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.494 17:41:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.494 ************************************ 00:06:23.494 START TEST locking_app_on_unlocked_coremask 00:06:23.494 ************************************ 00:06:23.494 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:23.494 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=823193 00:06:23.494 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:23.494 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 823193 /var/tmp/spdk.sock 00:06:23.494 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 823193 ']' 00:06:23.494 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.494 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.494 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.494 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.494 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.494 [2024-07-20 17:41:58.212889] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:23.494 [2024-07-20 17:41:58.212966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823193 ] 00:06:23.494 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.494 [2024-07-20 17:41:58.273867] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:23.494 [2024-07-20 17:41:58.273905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.752 [2024-07-20 17:41:58.362373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.010 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:24.010 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:24.010 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=823196 00:06:24.010 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.010 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 823196 /var/tmp/spdk2.sock 00:06:24.010 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 823196 ']' 00:06:24.010 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.010 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:24.010 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.010 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:24.010 17:41:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.010 [2024-07-20 17:41:58.672841] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:24.010 [2024-07-20 17:41:58.672927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823196 ] 00:06:24.010 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.010 [2024-07-20 17:41:58.767289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.267 [2024-07-20 17:41:58.956880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.832 17:41:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:24.832 17:41:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:24.832 17:41:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 823196 00:06:24.832 17:41:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 823196 00:06:24.832 17:41:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.395 lslocks: write error 00:06:25.395 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 823193 00:06:25.395 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 823193 ']' 00:06:25.395 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 823193 00:06:25.395 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:25.395 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:25.395 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 823193 00:06:25.395 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:25.395 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:25.395 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 823193' 00:06:25.395 killing process with pid 823193 00:06:25.395 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 823193 00:06:25.395 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 823193 00:06:26.326 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 823196 00:06:26.326 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 823196 ']' 00:06:26.326 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 823196 00:06:26.326 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:26.326 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:26.326 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 823196 00:06:26.326 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:26.326 
17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:26.326 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 823196' 00:06:26.326 killing process with pid 823196 00:06:26.326 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 823196 00:06:26.326 17:42:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 823196 00:06:26.583 00:06:26.583 real 0m3.203s 00:06:26.583 user 0m3.358s 00:06:26.583 sys 0m1.074s 00:06:26.583 17:42:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.583 17:42:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.583 ************************************ 00:06:26.583 END TEST locking_app_on_unlocked_coremask 00:06:26.583 ************************************ 00:06:26.840 17:42:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:26.840 17:42:01 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:26.840 17:42:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.840 17:42:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.840 ************************************ 00:06:26.840 START TEST locking_app_on_locked_coremask 00:06:26.840 ************************************ 00:06:26.840 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:26.840 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=823716 00:06:26.840 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.840 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 823716 /var/tmp/spdk.sock 00:06:26.840 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 823716 ']' 00:06:26.840 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.840 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.840 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.840 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.840 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.840 [2024-07-20 17:42:01.463609] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:26.840 [2024-07-20 17:42:01.463703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823716 ] 00:06:26.840 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.840 [2024-07-20 17:42:01.521219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.840 [2024-07-20 17:42:01.608429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=823749 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 823749 /var/tmp/spdk2.sock 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 823749 /var/tmp/spdk2.sock 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 823749 /var/tmp/spdk2.sock 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 823749 ']' 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.097 17:42:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.354 [2024-07-20 17:42:01.906271] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:27.354 [2024-07-20 17:42:01.906361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823749 ] 00:06:27.354 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.355 [2024-07-20 17:42:01.996949] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 823716 has claimed it. 00:06:27.355 [2024-07-20 17:42:01.996998] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:27.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (823749) - No such process 00:06:27.919 ERROR: process (pid: 823749) is no longer running 00:06:27.919 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.919 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:27.919 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:27.919 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:27.919 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:27.919 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:27.919 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 823716 00:06:27.919 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 823716 00:06:27.919 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.176 lslocks: write error 00:06:28.176 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 823716 00:06:28.176 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 823716 ']' 00:06:28.177 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 823716 00:06:28.177 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:28.177 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:28.177 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 823716 00:06:28.177 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:28.177 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:28.177 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 823716' 00:06:28.177 killing process with pid 823716 00:06:28.177 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 823716 00:06:28.177 17:42:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 823716 00:06:28.741 00:06:28.741 real 0m1.905s 00:06:28.741 user 0m2.065s 00:06:28.741 sys 0m0.616s 00:06:28.741 17:42:03 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.741 17:42:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.741 ************************************ 00:06:28.741 END TEST locking_app_on_locked_coremask 00:06:28.741 ************************************ 00:06:28.741 17:42:03 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:28.741 17:42:03 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:28.741 17:42:03 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.741 17:42:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.741 ************************************ 00:06:28.741 START TEST locking_overlapped_coremask 00:06:28.741 ************************************ 00:06:28.741 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:28.741 17:42:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=823986 00:06:28.741 17:42:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:28.741 17:42:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 823986 /var/tmp/spdk.sock 00:06:28.741 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 823986 ']' 00:06:28.741 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.741 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.741 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.741 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.741 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.741 [2024-07-20 17:42:03.412721] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:28.741 [2024-07-20 17:42:03.412830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823986 ] 00:06:28.741 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.741 [2024-07-20 17:42:03.473238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.998 [2024-07-20 17:42:03.566789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.999 [2024-07-20 17:42:03.566857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.999 [2024-07-20 17:42:03.566861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=824043 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 824043 /var/tmp/spdk2.sock 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 824043 /var/tmp/spdk2.sock 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 824043 /var/tmp/spdk2.sock 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 824043 ']' 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:29.255 17:42:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.255 [2024-07-20 17:42:03.869930] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:29.255 [2024-07-20 17:42:03.870041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid824043 ] 00:06:29.255 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.255 [2024-07-20 17:42:03.959284] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 823986 has claimed it. 00:06:29.255 [2024-07-20 17:42:03.959341] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:29.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (824043) - No such process 00:06:29.819 ERROR: process (pid: 824043) is no longer running 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 823986 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 823986 ']' 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 823986 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 823986 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 823986' 00:06:29.819 killing process with pid 823986 00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 823986 
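The claim failure above follows directly from the two core masks used in this test: -m 0x7 covers cores 0, 1 and 2 while -m 0x1c covers cores 2, 3 and 4, so the second target tries to lock core 2 while the first already holds /var/tmp/spdk_cpu_lock_002. The collision can be read straight off the masks with plain shell arithmetic; nothing in this sketch is SPDK-specific:

    mask1=0x7    # first target: cores 0,1,2
    mask2=0x1c   # second target: cores 2,3,4
    overlap=$(( mask1 & mask2 ))
    printf 'overlap mask: 0x%x\n' "$overlap"   # 0x4, i.e. bit 2, so core 2 is contested

    # list every core number that both masks claim
    for core in $(seq 0 63); do
        (( (overlap >> core) & 1 )) && echo "core $core is claimed by both masks"
    done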
00:06:29.819 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 823986 00:06:30.403 00:06:30.403 real 0m1.606s 00:06:30.403 user 0m4.308s 00:06:30.403 sys 0m0.460s 00:06:30.403 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.403 17:42:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.403 ************************************ 00:06:30.403 END TEST locking_overlapped_coremask 00:06:30.403 ************************************ 00:06:30.403 17:42:04 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:30.403 17:42:04 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.403 17:42:04 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.403 17:42:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.403 ************************************ 00:06:30.403 START TEST locking_overlapped_coremask_via_rpc 00:06:30.403 ************************************ 00:06:30.403 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:30.403 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=824211 00:06:30.403 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:30.403 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 824211 /var/tmp/spdk.sock 00:06:30.403 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 824211 ']' 00:06:30.403 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.403 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.403 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.403 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.403 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.403 [2024-07-20 17:42:05.069242] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:30.403 [2024-07-20 17:42:05.069348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid824211 ] 00:06:30.403 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.403 [2024-07-20 17:42:05.131433] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.403 [2024-07-20 17:42:05.131471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.662 [2024-07-20 17:42:05.221929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.662 [2024-07-20 17:42:05.221985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.662 [2024-07-20 17:42:05.221989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.919 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.919 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:30.919 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=824220 00:06:30.919 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:30.919 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 824220 /var/tmp/spdk2.sock 00:06:30.919 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 824220 ']' 00:06:30.919 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.919 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.919 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.919 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.919 17:42:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.919 [2024-07-20 17:42:05.530231] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:30.919 [2024-07-20 17:42:05.530325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid824220 ] 00:06:30.919 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.919 [2024-07-20 17:42:05.622331] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.919 [2024-07-20 17:42:05.622375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.175 [2024-07-20 17:42:05.803998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.175 [2024-07-20 17:42:05.804052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:31.175 [2024-07-20 17:42:05.804054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.737 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.737 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:31.737 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:31.737 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.737 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.737 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.737 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.737 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:31.737 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.737 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:31.737 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.737 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:31.737 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.737 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.737 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.738 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.738 [2024-07-20 17:42:06.487884] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 824211 has claimed it. 
00:06:31.738 request: 00:06:31.738 { 00:06:31.738 "method": "framework_enable_cpumask_locks", 00:06:31.738 "req_id": 1 00:06:31.738 } 00:06:31.738 Got JSON-RPC error response 00:06:31.738 response: 00:06:31.738 { 00:06:31.738 "code": -32603, 00:06:31.738 "message": "Failed to claim CPU core: 2" 00:06:31.738 } 00:06:31.738 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:31.738 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:31.738 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:31.738 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:31.738 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:31.738 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 824211 /var/tmp/spdk.sock 00:06:31.738 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 824211 ']' 00:06:31.738 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.738 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.738 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.738 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.738 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.994 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.994 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:31.994 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 824220 /var/tmp/spdk2.sock 00:06:31.994 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 824220 ']' 00:06:31.994 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.994 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.994 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
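The JSON-RPC exchange above is framework_enable_cpumask_locks failing with -32603 because pid 824211 still holds the lock for core 2. The same calls can be issued by hand with SPDK's rpc.py; a sketch assuming the repository's scripts/rpc.py and the socket paths used in this run:

    # ask the second target (started with --disable-cpumask-locks) to claim its core locks now;
    # this is expected to fail while another target still holds one of the cores in the mask
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo "enable failed: a core in the mask is already locked"

    # the first target can release its locks at runtime with the matching disable call
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks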
00:06:31.994 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.994 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.250 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.250 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:32.250 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:32.250 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.250 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.250 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.250 00:06:32.250 real 0m1.981s 00:06:32.250 user 0m1.040s 00:06:32.250 sys 0m0.154s 00:06:32.250 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.250 17:42:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.250 ************************************ 00:06:32.250 END TEST locking_overlapped_coremask_via_rpc 00:06:32.250 ************************************ 00:06:32.250 17:42:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:32.250 17:42:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 824211 ]] 00:06:32.250 17:42:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 824211 00:06:32.250 17:42:07 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 824211 ']' 00:06:32.250 17:42:07 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 824211 00:06:32.250 17:42:07 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:32.250 17:42:07 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:32.251 17:42:07 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 824211 00:06:32.251 17:42:07 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:32.251 17:42:07 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:32.251 17:42:07 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 824211' 00:06:32.251 killing process with pid 824211 00:06:32.251 17:42:07 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 824211 00:06:32.251 17:42:07 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 824211 00:06:32.814 17:42:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 824220 ]] 00:06:32.814 17:42:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 824220 00:06:32.814 17:42:07 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 824220 ']' 00:06:32.814 17:42:07 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 824220 00:06:32.814 17:42:07 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:32.814 17:42:07 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
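check_remaining_locks in the trace above verifies that exactly the lock files for cores 0 to 2 are left after the RPC test: one file per claimed core under /var/tmp, with a three-digit core number. A standalone version of the same check, assuming a single target is running with -m 0x7 and holding its locks:

    expected=(/var/tmp/spdk_cpu_lock_{000..002})
    actual=(/var/tmp/spdk_cpu_lock_*)

    # compare the globbed lock files against the expected per-core names
    if [[ "${actual[*]}" == "${expected[*]}" ]]; then
        echo "only cores 0-2 are locked, as expected"
    else
        echo "unexpected lock files: ${actual[*]}"
    fi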
00:06:32.814 17:42:07 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 824220 00:06:32.814 17:42:07 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:32.814 17:42:07 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:32.814 17:42:07 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 824220' 00:06:32.814 killing process with pid 824220 00:06:32.814 17:42:07 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 824220 00:06:32.814 17:42:07 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 824220 00:06:33.072 17:42:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.072 17:42:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:33.072 17:42:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 824211 ]] 00:06:33.072 17:42:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 824211 00:06:33.072 17:42:07 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 824211 ']' 00:06:33.072 17:42:07 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 824211 00:06:33.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (824211) - No such process 00:06:33.072 17:42:07 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 824211 is not found' 00:06:33.072 Process with pid 824211 is not found 00:06:33.072 17:42:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 824220 ]] 00:06:33.072 17:42:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 824220 00:06:33.072 17:42:07 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 824220 ']' 00:06:33.072 17:42:07 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 824220 00:06:33.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (824220) - No such process 00:06:33.072 17:42:07 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 824220 is not found' 00:06:33.072 Process with pid 824220 is not found 00:06:33.072 17:42:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.072 00:06:33.072 real 0m15.481s 00:06:33.072 user 0m26.978s 00:06:33.072 sys 0m5.333s 00:06:33.072 17:42:07 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.072 17:42:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.072 ************************************ 00:06:33.072 END TEST cpu_locks 00:06:33.072 ************************************ 00:06:33.330 00:06:33.330 real 0m41.493s 00:06:33.330 user 1m19.447s 00:06:33.330 sys 0m9.472s 00:06:33.330 17:42:07 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.330 17:42:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.330 ************************************ 00:06:33.330 END TEST event 00:06:33.330 ************************************ 00:06:33.330 17:42:07 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:33.330 17:42:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.330 17:42:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.330 17:42:07 -- common/autotest_common.sh@10 -- # set +x 00:06:33.330 ************************************ 00:06:33.330 START TEST thread 00:06:33.330 ************************************ 00:06:33.330 17:42:07 thread -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:33.330 * Looking for test storage... 00:06:33.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:33.330 17:42:07 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.330 17:42:07 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:33.330 17:42:07 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.330 17:42:07 thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.330 ************************************ 00:06:33.330 START TEST thread_poller_perf 00:06:33.330 ************************************ 00:06:33.330 17:42:07 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.330 [2024-07-20 17:42:08.001870] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:33.330 [2024-07-20 17:42:08.001929] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid825019 ] 00:06:33.330 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.330 [2024-07-20 17:42:08.060733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.588 [2024-07-20 17:42:08.152224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.588 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:34.521 ====================================== 00:06:34.521 busy:2710326785 (cyc) 00:06:34.521 total_run_count: 297000 00:06:34.521 tsc_hz: 2700000000 (cyc) 00:06:34.521 ====================================== 00:06:34.521 poller_cost: 9125 (cyc), 3379 (nsec) 00:06:34.521 00:06:34.521 real 0m1.246s 00:06:34.521 user 0m1.162s 00:06:34.521 sys 0m0.079s 00:06:34.521 17:42:09 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.521 17:42:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.521 ************************************ 00:06:34.521 END TEST thread_poller_perf 00:06:34.521 ************************************ 00:06:34.521 17:42:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.521 17:42:09 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:34.521 17:42:09 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.521 17:42:09 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.521 ************************************ 00:06:34.521 START TEST thread_poller_perf 00:06:34.521 ************************************ 00:06:34.521 17:42:09 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.521 [2024-07-20 17:42:09.289230] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:34.521 [2024-07-20 17:42:09.289297] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid825365 ] 00:06:34.521 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.778 [2024-07-20 17:42:09.348636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.778 [2024-07-20 17:42:09.441218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.778 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:36.150 ====================================== 00:06:36.150 busy:2702625498 (cyc) 00:06:36.150 total_run_count: 3884000 00:06:36.150 tsc_hz: 2700000000 (cyc) 00:06:36.150 ====================================== 00:06:36.150 poller_cost: 695 (cyc), 257 (nsec) 00:06:36.150 00:06:36.150 real 0m1.240s 00:06:36.150 user 0m1.157s 00:06:36.150 sys 0m0.078s 00:06:36.150 17:42:10 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.150 17:42:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.150 ************************************ 00:06:36.150 END TEST thread_poller_perf 00:06:36.150 ************************************ 00:06:36.150 17:42:10 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:36.150 00:06:36.150 real 0m2.617s 00:06:36.150 user 0m2.371s 00:06:36.150 sys 0m0.244s 00:06:36.150 17:42:10 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.150 17:42:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.150 ************************************ 00:06:36.150 END TEST thread 00:06:36.151 ************************************ 00:06:36.151 17:42:10 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:36.151 17:42:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:36.151 17:42:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.151 17:42:10 -- common/autotest_common.sh@10 -- # set +x 00:06:36.151 ************************************ 00:06:36.151 START TEST accel 00:06:36.151 ************************************ 00:06:36.151 17:42:10 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:36.151 * Looking for test storage... 
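Both poller_perf runs above are launched as test/thread/poller_perf/poller_perf with -b 1000 -l 1 -t 1 and -b 1000 -l 0 -t 1, and the banner "Running 1000 pollers for 1 seconds with N microseconds period" suggests -b is the poller count, -l the poller period in microseconds and -t the run time in seconds. The reported poller_cost also appears to be busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz; a quick sanity check of that inferred formula (not taken from the poller_perf source) against the two result blocks:

awk 'BEGIN { c = int(2710326785 / 297000);  printf "%d cyc  %d ns\n", c, c / 2.7 }'   # 1 us period run: ~9125 cyc, ~3379 ns per poll
awk 'BEGIN { c = int(2702625498 / 3884000); printf "%d cyc  %d ns\n", c, c / 2.7 }'   # 0 us period run: ~695 cyc, ~257 ns per poll
# 2.7 = tsc_hz (2700000000 cyc/s) expressed in cycles per nanosecond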
00:06:36.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:36.151 17:42:10 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:36.151 17:42:10 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:36.151 17:42:10 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:36.151 17:42:10 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=825563 00:06:36.151 17:42:10 accel -- accel/accel.sh@63 -- # waitforlisten 825563 00:06:36.151 17:42:10 accel -- common/autotest_common.sh@827 -- # '[' -z 825563 ']' 00:06:36.151 17:42:10 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:36.151 17:42:10 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.151 17:42:10 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:36.151 17:42:10 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:36.151 17:42:10 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.151 17:42:10 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.151 17:42:10 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.151 17:42:10 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:36.151 17:42:10 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.151 17:42:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.151 17:42:10 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.151 17:42:10 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.151 17:42:10 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:36.151 17:42:10 accel -- accel/accel.sh@41 -- # jq -r . 00:06:36.151 [2024-07-20 17:42:10.692704] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:36.151 [2024-07-20 17:42:10.692772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid825563 ] 00:06:36.151 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.151 [2024-07-20 17:42:10.753522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.151 [2024-07-20 17:42:10.843563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.409 17:42:11 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:36.409 17:42:11 accel -- common/autotest_common.sh@860 -- # return 0 00:06:36.409 17:42:11 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:36.409 17:42:11 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:36.409 17:42:11 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:36.409 17:42:11 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:36.409 17:42:11 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:36.409 17:42:11 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:36.409 17:42:11 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.409 17:42:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.409 17:42:11 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:36.409 17:42:11 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.409 17:42:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.409 17:42:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.409 17:42:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.409 17:42:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.409 17:42:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.409 17:42:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.409 17:42:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.409 17:42:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.409 17:42:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.409 17:42:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.409 17:42:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.409 17:42:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.409 17:42:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.409 17:42:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.409 17:42:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.409 17:42:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.409 17:42:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.409 17:42:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.409 17:42:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.409 17:42:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.409 17:42:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.409 17:42:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.409 17:42:11 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:36.409 17:42:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.410 17:42:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.410 17:42:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.410 17:42:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.410 17:42:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.410 17:42:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.410 17:42:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.410 17:42:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.410 17:42:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.410 17:42:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.410 17:42:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.410 17:42:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.410 17:42:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:36.410 17:42:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:36.410 17:42:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:36.410 17:42:11 accel -- accel/accel.sh@75 -- # killprocess 825563 00:06:36.410 17:42:11 accel -- common/autotest_common.sh@946 -- # '[' -z 825563 ']' 00:06:36.410 17:42:11 accel -- common/autotest_common.sh@950 -- # kill -0 825563 00:06:36.410 17:42:11 accel -- common/autotest_common.sh@951 -- # uname 00:06:36.410 17:42:11 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:36.410 17:42:11 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 825563 00:06:36.410 17:42:11 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:36.410 17:42:11 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:36.410 17:42:11 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 825563' 00:06:36.410 killing process with pid 825563 00:06:36.410 17:42:11 accel -- common/autotest_common.sh@965 -- # kill 825563 00:06:36.410 17:42:11 accel -- common/autotest_common.sh@970 -- # wait 825563 00:06:37.036 17:42:11 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:37.036 17:42:11 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:37.036 17:42:11 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:37.036 17:42:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.036 17:42:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.036 17:42:11 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:37.036 17:42:11 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:37.036 17:42:11 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:37.036 17:42:11 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.036 17:42:11 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.036 17:42:11 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.036 17:42:11 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.036 17:42:11 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.036 17:42:11 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:37.036 17:42:11 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:37.036 17:42:11 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.036 17:42:11 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:37.036 17:42:11 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:37.036 17:42:11 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:37.036 17:42:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.036 17:42:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.036 ************************************ 00:06:37.036 START TEST accel_missing_filename 00:06:37.036 ************************************ 00:06:37.036 17:42:11 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:37.036 17:42:11 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:37.036 17:42:11 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:37.036 17:42:11 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:37.036 17:42:11 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.036 17:42:11 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:37.036 17:42:11 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.036 17:42:11 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:37.036 17:42:11 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:37.036 17:42:11 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:37.036 17:42:11 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.036 17:42:11 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.036 17:42:11 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.036 17:42:11 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.036 17:42:11 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.036 17:42:11 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:37.036 17:42:11 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:37.036 [2024-07-20 17:42:11.674480] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:37.036 [2024-07-20 17:42:11.674541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid825732 ] 00:06:37.036 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.036 [2024-07-20 17:42:11.737641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.036 [2024-07-20 17:42:11.830806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.294 [2024-07-20 17:42:11.892454] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.294 [2024-07-20 17:42:11.973144] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:37.294 A filename is required. 
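That failure is the point of accel_missing_filename: accel_perf is started with "-t 1 -w compress" but no "-l <file>", so it aborts at startup with "A filename is required." and the NOT wrapper counts the failure as a pass. Per the usage text printed further down, compress/decompress workloads read their input from the file given with -l; a passing invocation would look like the one the next test (accel_compress_verify) uses, with paths relative to the spdk tree:

./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib    # compress the sample input file for 1 second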
00:06:37.294 17:42:12 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:37.294 17:42:12 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.294 17:42:12 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:37.294 17:42:12 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:37.294 17:42:12 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:37.294 17:42:12 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.294 00:06:37.294 real 0m0.401s 00:06:37.294 user 0m0.282s 00:06:37.294 sys 0m0.152s 00:06:37.294 17:42:12 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.294 17:42:12 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:37.294 ************************************ 00:06:37.294 END TEST accel_missing_filename 00:06:37.294 ************************************ 00:06:37.294 17:42:12 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.294 17:42:12 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:37.294 17:42:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.294 17:42:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.552 ************************************ 00:06:37.552 START TEST accel_compress_verify 00:06:37.552 ************************************ 00:06:37.552 17:42:12 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.552 17:42:12 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:37.552 17:42:12 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.552 17:42:12 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:37.552 17:42:12 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.552 17:42:12 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:37.552 17:42:12 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.552 17:42:12 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.553 17:42:12 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.553 17:42:12 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:37.553 17:42:12 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.553 17:42:12 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.553 17:42:12 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.553 17:42:12 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.553 17:42:12 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.553 
17:42:12 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:37.553 17:42:12 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:37.553 [2024-07-20 17:42:12.126125] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:37.553 [2024-07-20 17:42:12.126189] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid825760 ] 00:06:37.553 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.553 [2024-07-20 17:42:12.188966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.553 [2024-07-20 17:42:12.279954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.553 [2024-07-20 17:42:12.339604] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.811 [2024-07-20 17:42:12.416288] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:37.811 00:06:37.811 Compression does not support the verify option, aborting. 00:06:37.811 17:42:12 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:37.811 17:42:12 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.811 17:42:12 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:37.811 17:42:12 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:37.811 17:42:12 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:37.811 17:42:12 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.811 00:06:37.811 real 0m0.393s 00:06:37.811 user 0m0.283s 00:06:37.811 sys 0m0.145s 00:06:37.811 17:42:12 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.811 17:42:12 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:37.811 ************************************ 00:06:37.811 END TEST accel_compress_verify 00:06:37.811 ************************************ 00:06:37.811 17:42:12 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:37.811 17:42:12 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:37.811 17:42:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.811 17:42:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.811 ************************************ 00:06:37.811 START TEST accel_wrong_workload 00:06:37.811 ************************************ 00:06:37.811 17:42:12 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:37.811 17:42:12 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:37.811 17:42:12 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:37.811 17:42:12 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:37.811 17:42:12 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.811 17:42:12 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:37.811 17:42:12 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.811 17:42:12 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
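accel_wrong_workload now drives the same NOT/valid_exec_arg pattern with a bogus workload name: the wrapped accel_perf call is expected to fail, its exit status lands in es, large values get folded down (es=234 -> 106 -> 1 in accel_missing_filename above), and the final (( !es == 0 )) check passes only if the command really did fail. A stripped-down sketch of that negative-test idea, not the actual autotest_common.sh implementation:

NOT() { if "$@"; then return 1; else return 0; fi; }                 # succeed only when the wrapped command fails
NOT ./build/examples/accel_perf -t 1 -w foobar && echo 'negative test passed'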
00:06:37.811 17:42:12 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:37.811 17:42:12 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:37.811 17:42:12 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.811 17:42:12 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.811 17:42:12 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.811 17:42:12 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.811 17:42:12 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.811 17:42:12 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:37.811 17:42:12 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:37.811 Unsupported workload type: foobar 00:06:37.811 [2024-07-20 17:42:12.566357] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:37.811 accel_perf options: 00:06:37.811 [-h help message] 00:06:37.811 [-q queue depth per core] 00:06:37.811 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:37.811 [-T number of threads per core 00:06:37.811 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:37.811 [-t time in seconds] 00:06:37.811 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:37.811 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:37.811 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:37.811 [-l for compress/decompress workloads, name of uncompressed input file 00:06:37.811 [-S for crc32c workload, use this seed value (default 0) 00:06:37.811 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:37.811 [-f for fill workload, use this BYTE value (default 255) 00:06:37.811 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:37.811 [-y verify result if this switch is on] 00:06:37.811 [-a tasks to allocate per core (default: same value as -q)] 00:06:37.811 Can be used to spread operations across a wider range of memory. 
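The listing above is accel_perf's own usage text, dumped because "-w foobar" is not one of the supported workload types. Combining flags from that list gives valid invocations like the ones the later positive tests use (binary path relative to the spdk tree, values illustrative):

./build/examples/accel_perf -t 1 -w crc32c -S 32 -y               # crc32c for 1 s with seed 32, verify results
./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y    # fill with byte 128, queue depth 64, 64 tasks per core
./build/examples/accel_perf -t 1 -w xor -y -x 2                   # xor needs at least 2 source buffers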
00:06:37.811 17:42:12 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:37.812 17:42:12 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.812 17:42:12 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.812 17:42:12 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.812 00:06:37.812 real 0m0.022s 00:06:37.812 user 0m0.008s 00:06:37.812 sys 0m0.015s 00:06:37.812 17:42:12 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.812 17:42:12 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:37.812 ************************************ 00:06:37.812 END TEST accel_wrong_workload 00:06:37.812 ************************************ 00:06:37.812 Error: writing output failed: Broken pipe 00:06:37.812 17:42:12 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:37.812 17:42:12 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:37.812 17:42:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.812 17:42:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.073 ************************************ 00:06:38.073 START TEST accel_negative_buffers 00:06:38.073 ************************************ 00:06:38.073 17:42:12 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:38.073 17:42:12 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:38.073 17:42:12 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:38.073 17:42:12 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:38.073 17:42:12 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.073 17:42:12 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:38.073 17:42:12 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.073 17:42:12 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:38.073 17:42:12 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:38.073 17:42:12 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:38.073 17:42:12 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.073 17:42:12 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.073 17:42:12 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.073 17:42:12 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.073 17:42:12 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.074 17:42:12 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:38.074 17:42:12 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:38.074 -x option must be non-negative. 
00:06:38.074 [2024-07-20 17:42:12.630529] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:38.074 accel_perf options: 00:06:38.074 [-h help message] 00:06:38.074 [-q queue depth per core] 00:06:38.074 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:38.074 [-T number of threads per core 00:06:38.074 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:38.074 [-t time in seconds] 00:06:38.074 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:38.074 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:38.074 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:38.074 [-l for compress/decompress workloads, name of uncompressed input file 00:06:38.074 [-S for crc32c workload, use this seed value (default 0) 00:06:38.074 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:38.074 [-f for fill workload, use this BYTE value (default 255) 00:06:38.074 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:38.074 [-y verify result if this switch is on] 00:06:38.074 [-a tasks to allocate per core (default: same value as -q)] 00:06:38.074 Can be used to spread operations across a wider range of memory. 00:06:38.074 17:42:12 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:38.074 17:42:12 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:38.074 17:42:12 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:38.074 17:42:12 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:38.074 00:06:38.074 real 0m0.022s 00:06:38.074 user 0m0.011s 00:06:38.074 sys 0m0.011s 00:06:38.074 17:42:12 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.074 17:42:12 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:38.074 ************************************ 00:06:38.074 END TEST accel_negative_buffers 00:06:38.074 ************************************ 00:06:38.074 Error: writing output failed: Broken pipe 00:06:38.074 17:42:12 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:38.074 17:42:12 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:38.074 17:42:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.074 17:42:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.074 ************************************ 00:06:38.074 START TEST accel_crc32c 00:06:38.074 ************************************ 00:06:38.074 17:42:12 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:38.074 17:42:12 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:38.074 17:42:12 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:38.074 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.074 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.074 17:42:12 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:38.074 17:42:12 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:38.074 17:42:12 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:38.074 17:42:12 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.074 17:42:12 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.074 17:42:12 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.074 17:42:12 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.074 17:42:12 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.074 17:42:12 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:38.074 17:42:12 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:38.074 [2024-07-20 17:42:12.688530] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:38.074 [2024-07-20 17:42:12.688579] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid825940 ] 00:06:38.074 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.074 [2024-07-20 17:42:12.749317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.074 [2024-07-20 17:42:12.841250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.333 17:42:12 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.333 17:42:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:39.707 17:42:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.707 00:06:39.707 real 0m1.397s 00:06:39.707 user 0m1.257s 00:06:39.707 sys 0m0.143s 00:06:39.707 17:42:14 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.707 17:42:14 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:39.707 ************************************ 00:06:39.707 END TEST accel_crc32c 00:06:39.707 ************************************ 00:06:39.707 17:42:14 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:39.707 17:42:14 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:39.707 17:42:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.707 17:42:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.707 ************************************ 00:06:39.707 START TEST accel_crc32c_C2 00:06:39.707 ************************************ 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:39.707 17:42:14 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:39.707 [2024-07-20 17:42:14.130502] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:39.707 [2024-07-20 17:42:14.130568] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid826101 ] 00:06:39.707 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.707 [2024-07-20 17:42:14.191874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.707 [2024-07-20 17:42:14.284996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.707 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.708 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.708 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.708 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.708 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.708 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.708 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.708 17:42:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.081 
17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.081 00:06:41.081 real 0m1.406s 00:06:41.081 user 0m1.263s 00:06:41.081 sys 0m0.145s 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.081 17:42:15 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:41.081 ************************************ 00:06:41.081 END TEST accel_crc32c_C2 00:06:41.081 ************************************ 00:06:41.081 17:42:15 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:41.081 17:42:15 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:41.081 17:42:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.081 17:42:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.081 ************************************ 00:06:41.081 START TEST accel_copy 00:06:41.081 ************************************ 00:06:41.081 17:42:15 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:41.081 17:42:15 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:41.081 [2024-07-20 17:42:15.579495] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:41.081 [2024-07-20 17:42:15.579557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid826254 ] 00:06:41.081 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.081 [2024-07-20 17:42:15.641597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.081 [2024-07-20 17:42:15.734335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.081 17:42:15 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.081 17:42:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.082 17:42:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
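For context on what the accel_copy case traced above is measuring: the configuration read in (software module, 4096-byte buffers, queue depth 32, a 1-second run) boils down to timed buffer copies. A minimal standalone C sketch of that idea, purely illustrative and not SPDK's accel_perf or its software accel module, might look like:

/* Illustrative "copy" micro-benchmark: repeatedly copy a 4 KiB source buffer
 * into a destination for roughly one second and report the operation count.
 * Only the -w copy, 4096-byte, 1-second parameters from the log are mirrored;
 * everything else (pattern value, timing method) is an assumption. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <time.h>

int main(void)
{
    enum { BUF_SIZE = 4096 };
    static uint8_t src[BUF_SIZE], dst[BUF_SIZE];
    uint64_t ops = 0;

    memset(src, 0xA5, sizeof(src));          /* fill the source with a pattern */

    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        memcpy(dst, src, BUF_SIZE);          /* the "copy" operation itself */
        ops++;
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while (now.tv_sec - start.tv_sec < 1); /* run for roughly one second */

    printf("copy: %llu x %u bytes in ~1s\n", (unsigned long long)ops, BUF_SIZE);
    return 0;
}

accel_perf itself drives the operation asynchronously through SPDK's accel framework rather than a plain memcpy loop; the sketch only mirrors the buffer size and duration.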
00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:42.454 17:42:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.454 00:06:42.454 real 0m1.395s 00:06:42.455 user 0m1.258s 00:06:42.455 sys 0m0.139s 00:06:42.455 17:42:16 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.455 17:42:16 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:42.455 ************************************ 00:06:42.455 END TEST accel_copy 00:06:42.455 ************************************ 00:06:42.455 17:42:16 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:42.455 17:42:16 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:42.455 17:42:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.455 17:42:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.455 ************************************ 00:06:42.455 START TEST accel_fill 00:06:42.455 ************************************ 00:06:42.455 17:42:17 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.455 17:42:17 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:42.455 [2024-07-20 17:42:17.018853] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:42.455 [2024-07-20 17:42:17.018912] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid826526 ] 00:06:42.455 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.455 [2024-07-20 17:42:17.079531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.455 [2024-07-20 17:42:17.172418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.455 17:42:17 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.455 17:42:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:43.835 17:42:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.835 00:06:43.835 real 0m1.402s 00:06:43.835 user 0m1.258s 00:06:43.835 sys 0m0.145s 00:06:43.835 17:42:18 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.835 17:42:18 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:43.835 ************************************ 00:06:43.835 END TEST accel_fill 00:06:43.835 ************************************ 00:06:43.835 17:42:18 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:43.835 17:42:18 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:43.835 17:42:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.835 17:42:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.835 ************************************ 00:06:43.835 START TEST accel_copy_crc32c 00:06:43.835 ************************************ 00:06:43.835 17:42:18 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:43.835 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:43.835 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:43.835 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:43.835 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:43.835 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:43.835 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.835 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.835 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.835 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.835 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.835 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:43.835 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
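The accel_copy_crc32c test starting here pairs a buffer copy with a CRC-32C (Castagnoli) checksum over the copied data. As a rough, self-contained illustration of that combination, a bitwise reference CRC-32C in C (real engines typically use table-driven code or the SSE4.2 CRC32 instruction) could be:

/* Illustrative copy + CRC-32C over one 4 KiB buffer.  Bitwise reference
 * implementation using the reflected Castagnoli polynomial 0x82F63B78;
 * buffer contents and the seed value of 0 are assumptions for the example. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <stddef.h>

static uint32_t crc32c(uint32_t crc, const uint8_t *buf, size_t len)
{
    crc = ~crc;
    while (len--) {
        crc ^= *buf++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78U & (uint32_t)-(int32_t)(crc & 1));
    }
    return ~crc;
}

int main(void)
{
    enum { BUF_SIZE = 4096 };
    static uint8_t src[BUF_SIZE], dst[BUF_SIZE];

    memset(src, 0x5A, sizeof(src));

    memcpy(dst, src, BUF_SIZE);                 /* the copy half of the operation */
    uint32_t crc = crc32c(0, dst, BUF_SIZE);    /* the CRC-32C half, seed 0       */

    printf("crc32c of %u copied bytes: 0x%08x\n", BUF_SIZE, crc);
    return 0;
}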
00:06:43.835 [2024-07-20 17:42:18.463053] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:43.835 [2024-07-20 17:42:18.463130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid826687 ] 00:06:43.835 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.835 [2024-07-20 17:42:18.527867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.835 [2024-07-20 17:42:18.619928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.094 17:42:18 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.094 17:42:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.522 00:06:45.522 real 0m1.410s 00:06:45.522 user 0m1.263s 00:06:45.522 sys 0m0.150s 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.522 17:42:19 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:45.522 ************************************ 00:06:45.522 END TEST accel_copy_crc32c 00:06:45.522 ************************************ 00:06:45.522 17:42:19 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:45.522 17:42:19 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:45.522 17:42:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.522 17:42:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.522 ************************************ 00:06:45.522 START TEST accel_copy_crc32c_C2 00:06:45.522 ************************************ 00:06:45.522 17:42:19 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:45.522 17:42:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.522 17:42:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:45.522 17:42:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:45.522 17:42:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:06:45.522 17:42:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:45.522 17:42:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.522 17:42:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.522 17:42:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.522 17:42:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.522 17:42:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.522 17:42:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.522 17:42:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:45.522 17:42:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:45.522 [2024-07-20 17:42:19.917497] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:45.522 [2024-07-20 17:42:19.917559] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid826840 ] 00:06:45.522 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.522 [2024-07-20 17:42:19.980224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.522 [2024-07-20 17:42:20.085939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.522 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.523 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.523 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.523 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.523 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.523 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.523 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.523 17:42:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.904 00:06:46.904 real 0m1.423s 00:06:46.904 user 0m1.277s 00:06:46.904 sys 0m0.149s 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.904 17:42:21 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:46.904 
************************************ 00:06:46.904 END TEST accel_copy_crc32c_C2 00:06:46.904 ************************************ 00:06:46.904 17:42:21 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:46.904 17:42:21 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:46.904 17:42:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.904 17:42:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.904 ************************************ 00:06:46.904 START TEST accel_dualcast 00:06:46.904 ************************************ 00:06:46.904 17:42:21 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:46.904 [2024-07-20 17:42:21.384547] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
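The accel_dualcast test launched above models writing one source buffer to two destinations in a single operation. A rough standalone sketch, using two plain memcpy calls instead of the accel API, might be:

/* Rough illustration of a "dualcast": one 4 KiB source copied to two
 * destination buffers, then verified.  Accel engines can offload this as a
 * single descriptor; here it is simply two memcpy calls for clarity. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    enum { BUF_SIZE = 4096 };
    static uint8_t src[BUF_SIZE], dst1[BUF_SIZE], dst2[BUF_SIZE];

    memset(src, 0x3C, sizeof(src));

    memcpy(dst1, src, BUF_SIZE);   /* first destination  */
    memcpy(dst2, src, BUF_SIZE);   /* second destination */

    assert(memcmp(dst1, src, BUF_SIZE) == 0);
    assert(memcmp(dst2, src, BUF_SIZE) == 0);
    puts("dualcast: both destinations match the source");
    return 0;
}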
00:06:46.904 [2024-07-20 17:42:21.384610] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid827002 ] 00:06:46.904 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.904 [2024-07-20 17:42:21.446548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.904 [2024-07-20 17:42:21.542115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 
17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.904 17:42:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.279 17:42:22 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:48.279 17:42:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.279 00:06:48.279 real 0m1.403s 00:06:48.279 user 0m1.260s 00:06:48.279 sys 0m0.144s 00:06:48.279 17:42:22 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.279 17:42:22 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:48.279 ************************************ 00:06:48.279 END TEST accel_dualcast 00:06:48.279 ************************************ 00:06:48.279 17:42:22 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:48.279 17:42:22 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:48.279 17:42:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.279 17:42:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.279 ************************************ 00:06:48.279 START TEST accel_compare 00:06:48.279 ************************************ 00:06:48.279 17:42:22 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:48.279 17:42:22 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:48.279 17:42:22 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:48.279 17:42:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.280 17:42:22 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:48.280 17:42:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.280 17:42:22 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:48.280 17:42:22 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:48.280 17:42:22 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.280 17:42:22 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.280 17:42:22 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.280 17:42:22 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.280 17:42:22 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.280 17:42:22 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:48.280 17:42:22 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:48.280 [2024-07-20 17:42:22.829850] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
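The accel_compare test being configured here models a byte-for-byte comparison of two equally sized buffers. A minimal illustrative sketch (not the accel API itself):

/* Minimal illustration of a "compare" operation: two 4 KiB buffers compared
 * byte-for-byte, which is what the accel compare opcode models. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    enum { BUF_SIZE = 4096 };
    static uint8_t a[BUF_SIZE], b[BUF_SIZE];

    memset(a, 0x77, sizeof(a));
    memset(b, 0x77, sizeof(b));

    int rc = memcmp(a, b, BUF_SIZE);
    printf("compare: buffers %s\n", rc == 0 ? "match" : "differ");
    return 0;
}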
00:06:48.280 [2024-07-20 17:42:22.829910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid827268 ] 00:06:48.280 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.280 [2024-07-20 17:42:22.891392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.280 [2024-07-20 17:42:22.985068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.280 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.281 17:42:23 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.281 17:42:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.654 17:42:24 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:49.654 17:42:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.654 00:06:49.654 real 0m1.391s 00:06:49.654 user 0m1.254s 00:06:49.654 sys 0m0.138s 00:06:49.654 17:42:24 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.654 17:42:24 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:49.654 ************************************ 00:06:49.654 END TEST accel_compare 00:06:49.654 ************************************ 00:06:49.654 17:42:24 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:49.654 17:42:24 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:49.654 17:42:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.654 17:42:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.654 ************************************ 00:06:49.654 START TEST accel_xor 00:06:49.654 ************************************ 00:06:49.654 17:42:24 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:49.654 17:42:24 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:49.654 17:42:24 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:49.654 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.654 17:42:24 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:49.654 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.654 17:42:24 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:49.654 17:42:24 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:49.654 17:42:24 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.654 17:42:24 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.654 17:42:24 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.654 17:42:24 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.654 17:42:24 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.654 17:42:24 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:49.654 17:42:24 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:49.654 [2024-07-20 17:42:24.262950] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
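The accel_xor block that starts above follows the same shape as every sub-test in this log: run_test launches accel_test, which ends up running build/examples/accel_perf with the arguments shown in the trace (-c /dev/fd/62 -t 1 -w xor -y), and accel.sh then walks the tool's printed settings with an IFS=':' read loop and a case statement — that is what the repeated @19/@20/@21 trace entries are. A minimal sketch of that check loop, with illustrative field labels and an illustrative input file (the real script parses accel_perf's own summary), assuming plain bash:

  # Hedged sketch of the check loop behind the repeated @19-@23 trace entries.
  # Field labels and the input file are illustrative; the reported values (xor,
  # software, queue depth 32, 4096-byte buffers, 1-second run) match the trace.
  while IFS=: read -r var val; do
      case "$var" in
          *'Workload Type'*) accel_opc=$val ;;    # e.g. xor
          *Module*)          accel_module=$val ;; # e.g. software
      esac
  done < accel_perf_summary.txt
  # Final pass condition, mirroring the @27 checks at the end of each sub-test:
  [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]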
00:06:49.654 [2024-07-20 17:42:24.263012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid827437 ] 00:06:49.654 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.654 [2024-07-20 17:42:24.324100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.654 [2024-07-20 17:42:24.416983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.913 17:42:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.286 
17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.286 00:06:51.286 real 0m1.410s 00:06:51.286 user 0m1.263s 00:06:51.286 sys 0m0.149s 00:06:51.286 17:42:25 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.286 17:42:25 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:51.286 ************************************ 00:06:51.286 END TEST accel_xor 00:06:51.286 ************************************ 00:06:51.286 17:42:25 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:51.286 17:42:25 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:51.286 17:42:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.286 17:42:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.286 ************************************ 00:06:51.286 START TEST accel_xor 00:06:51.286 ************************************ 00:06:51.286 17:42:25 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:51.286 [2024-07-20 17:42:25.724779] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
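This second accel_xor pass is the same one-second software run with one extra flag, -x 3; the first pass reported val=2 right after the workload type while this one reports val=3 a few lines below, which suggests -x raises the number of xor source buffers from the default of two to three (an inference from the trace, not something the log states). A hedged sketch of the two invocations — note that -c /dev/fd/62 in the traced command is the JSON config the harness pipes in, so a manual run would drop it or point -c at a real file:

  # Paths as seen in the trace; -c /dev/fd/62 omitted for a standalone run.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w xor -y        # first pass above: val=2 sources
  "$SPDK/build/examples/accel_perf" -t 1 -w xor -y -x 3   # this pass: val=3 sources, assuming -x is the source count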
00:06:51.286 [2024-07-20 17:42:25.724865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid827588 ] 00:06:51.286 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.286 [2024-07-20 17:42:25.787888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.286 [2024-07-20 17:42:25.878908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:51.286 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.287 17:42:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.657 
17:42:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:52.657 17:42:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.657 00:06:52.657 real 0m1.407s 00:06:52.657 user 0m1.266s 00:06:52.657 sys 0m0.144s 00:06:52.657 17:42:27 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.657 17:42:27 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:52.657 ************************************ 00:06:52.657 END TEST accel_xor 00:06:52.657 ************************************ 00:06:52.657 17:42:27 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:52.657 17:42:27 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:52.657 17:42:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.657 17:42:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.657 ************************************ 00:06:52.657 START TEST accel_dif_verify 00:06:52.657 ************************************ 00:06:52.657 17:42:27 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:52.657 17:42:27 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:52.657 17:42:27 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:52.657 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:52.658 [2024-07-20 17:42:27.174546] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
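accel_dif_verify swaps the workload name but keeps the one-second software run; the settings echoed below add 512-byte and 8-byte values alongside the 4096-byte buffers, which look like the DIF block size and per-block metadata size reported for this workload (an interpretation, not something the log states). The chain for this sub-test, as traced at accel.sh@111 and @12/@15:

  # From the trace above:
  run_test accel_dif_verify accel_test -t 1 -w dif_verify
  #   -> accel_perf -t 1 -w dif_verify
  #   -> /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify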
00:06:52.658 [2024-07-20 17:42:27.174608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid827798 ] 00:06:52.658 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.658 [2024-07-20 17:42:27.236210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.658 [2024-07-20 17:42:27.329110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 
17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.658 17:42:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.029 
17:42:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:54.029 17:42:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.029 00:06:54.029 real 0m1.410s 00:06:54.029 user 0m1.260s 00:06:54.029 sys 0m0.154s 00:06:54.029 17:42:28 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.029 17:42:28 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:54.029 ************************************ 00:06:54.029 END TEST accel_dif_verify 00:06:54.029 ************************************ 00:06:54.029 17:42:28 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:54.029 17:42:28 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:54.029 17:42:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.029 17:42:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.029 ************************************ 00:06:54.029 START TEST accel_dif_generate 00:06:54.029 ************************************ 00:06:54.029 17:42:28 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:06:54.029 17:42:28 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:54.029 17:42:28 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:54.029 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.029 17:42:28 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
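accel_dif_generate is the producer-side counterpart: the invocation differs only in the workload name, and the same buffer geometry is echoed below. Reconstructing the summary the check loop parses for this pass from the val=... assignments that follow (the labels are guesses; the values themselves are straight from the trace):

  #   workload:        dif_generate
  #   buffers:         4096 bytes (reported twice)
  #   block size:      512 bytes        (assumed meaning)
  #   metadata:        8 bytes          (assumed meaning)
  #   queue/allocate:  32 / 32
  #   threads/core:    1
  #   run time:        1 seconds
  #   verify result:   No               (the DIF passes run without -y, unlike the xor passes above)
  #   module:          software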
00:06:54.029 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.029 17:42:28 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:54.029 17:42:28 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:54.029 17:42:28 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.029 17:42:28 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.029 17:42:28 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.029 17:42:28 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.029 17:42:28 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.029 17:42:28 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:54.029 17:42:28 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:54.030 [2024-07-20 17:42:28.635958] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:54.030 [2024-07-20 17:42:28.636020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828017 ] 00:06:54.030 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.030 [2024-07-20 17:42:28.701994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.030 [2024-07-20 17:42:28.799483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.286 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.286 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.286 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.286 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.286 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.286 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.286 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.286 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.286 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:54.286 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.286 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.286 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.287 17:42:28 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.287 17:42:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:55.656 17:42:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.656 00:06:55.656 real 0m1.416s 00:06:55.657 user 0m1.276s 00:06:55.657 sys 
0m0.145s 00:06:55.657 17:42:30 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.657 17:42:30 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:55.657 ************************************ 00:06:55.657 END TEST accel_dif_generate 00:06:55.657 ************************************ 00:06:55.657 17:42:30 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:55.657 17:42:30 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:55.657 17:42:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.657 17:42:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.657 ************************************ 00:06:55.657 START TEST accel_dif_generate_copy 00:06:55.657 ************************************ 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:55.657 [2024-07-20 17:42:30.090010] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
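accel_dif_generate_copy closes out the DIF trio; going by the name it generates the protection fields while copying the data, again as a one-second software run (the log only shows the invocation and the echoed settings, so treat that description as a reading of the name). All three DIF passes are launched the same way and land in the same ballpark wall time:

  # From accel.sh@111-@113 in this run:
  run_test accel_dif_verify        accel_test -t 1 -w dif_verify          # real 0m1.410s above
  run_test accel_dif_generate      accel_test -t 1 -w dif_generate        # real 0m1.416s above
  run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy   # result reported further below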
00:06:55.657 [2024-07-20 17:42:30.090070] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828178 ] 00:06:55.657 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.657 [2024-07-20 17:42:30.152892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.657 [2024-07-20 17:42:30.245988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.657 17:42:30 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.657 17:42:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.028 00:06:57.028 real 0m1.410s 00:06:57.028 user 0m1.265s 00:06:57.028 sys 0m0.147s 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.028 17:42:31 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:57.028 ************************************ 00:06:57.028 END TEST accel_dif_generate_copy 00:06:57.028 ************************************ 00:06:57.028 17:42:31 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:57.028 17:42:31 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.028 17:42:31 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:57.028 17:42:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.028 17:42:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.028 ************************************ 00:06:57.028 START TEST accel_comp 00:06:57.028 ************************************ 00:06:57.028 17:42:31 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:57.028 [2024-07-20 17:42:31.544339] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:57.028 [2024-07-20 17:42:31.544403] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828335 ] 00:06:57.028 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.028 [2024-07-20 17:42:31.607002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.028 [2024-07-20 17:42:31.699475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 
17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.028 17:42:31 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.028 17:42:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:58.400 17:42:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.400 00:06:58.400 real 0m1.402s 00:06:58.400 user 0m1.254s 00:06:58.400 sys 0m0.151s 00:06:58.400 17:42:32 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.400 17:42:32 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:58.400 ************************************ 00:06:58.400 END TEST accel_comp 00:06:58.400 ************************************ 00:06:58.400 17:42:32 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.400 17:42:32 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:58.400 17:42:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.400 17:42:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.400 ************************************ 00:06:58.400 START TEST accel_decomp 00:06:58.400 ************************************ 00:06:58.400 17:42:32 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.400 17:42:32 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:58.400 17:42:32 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:58.400 17:42:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.400 17:42:32 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.400 17:42:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.400 17:42:32 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.400 17:42:32 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:58.400 17:42:32 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.400 17:42:32 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.400 17:42:32 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.400 17:42:32 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.400 17:42:32 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.400 17:42:32 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:58.400 17:42:32 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:58.400 [2024-07-20 17:42:32.993156] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:58.400 [2024-07-20 17:42:32.993225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828603 ] 00:06:58.400 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.400 [2024-07-20 17:42:33.054439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.400 [2024-07-20 17:42:33.148748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.658 17:42:33 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.658 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.659 17:42:33 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.659 17:42:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:59.591 17:42:34 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.591 00:06:59.591 real 0m1.407s 00:06:59.591 user 0m1.260s 00:06:59.591 sys 0m0.150s 00:06:59.591 17:42:34 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.591 17:42:34 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:59.591 ************************************ 00:06:59.591 END TEST accel_decomp 00:06:59.591 ************************************ 00:06:59.849 
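Each of these test blocks is launched through run_test and ultimately invokes the accel_perf example binary with the workload flags recorded on the accel.sh@12 lines: -t 1 for a one-second run, -w compress or decompress for the opcode, -l .../spdk/test/accel/bib for the input file, and -y to verify. A hedged sketch of reproducing one run by hand, using only the paths shown in this log and omitting the -c /dev/fd/62 generated accel config that the harness also passes:

    # hedged sketch: re-running the decompress case outside the test harness
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y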
17:42:34 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:59.849 17:42:34 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:59.849 17:42:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.849 17:42:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.849 ************************************ 00:06:59.849 START TEST accel_decmop_full 00:06:59.849 ************************************ 00:06:59.849 17:42:34 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:59.849 17:42:34 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:59.849 17:42:34 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:59.849 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.849 17:42:34 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:59.849 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.849 17:42:34 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:59.849 17:42:34 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:59.849 17:42:34 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.849 17:42:34 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.849 17:42:34 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.849 17:42:34 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.849 17:42:34 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.849 17:42:34 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:59.849 17:42:34 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:59.849 [2024-07-20 17:42:34.442397] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:59.849 [2024-07-20 17:42:34.442462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828765 ] 00:06:59.849 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.849 [2024-07-20 17:42:34.503561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.849 [2024-07-20 17:42:34.593975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.106 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.107 17:42:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:01.511 17:42:35 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.511 00:07:01.511 real 0m1.416s 00:07:01.511 user 0m1.279s 00:07:01.511 sys 0m0.139s 00:07:01.511 17:42:35 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.511 17:42:35 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:01.511 ************************************ 00:07:01.511 END TEST accel_decmop_full 00:07:01.511 ************************************ 00:07:01.511 17:42:35 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:01.511 17:42:35 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:01.511 17:42:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.511 17:42:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.511 ************************************ 00:07:01.511 START TEST accel_decomp_mcore 00:07:01.511 ************************************ 00:07:01.511 17:42:35 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:01.511 17:42:35 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:01.511 17:42:35 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:01.511 17:42:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.511 17:42:35 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:01.511 17:42:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.511 17:42:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:01.511 17:42:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:01.511 17:42:35 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.511 17:42:35 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.511 17:42:35 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.511 17:42:35 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.511 17:42:35 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.511 17:42:35 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:01.511 17:42:35 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:01.511 [2024-07-20 17:42:35.908266] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:01.511 [2024-07-20 17:42:35.908330] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828923 ] 00:07:01.511 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.511 [2024-07-20 17:42:35.971588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.511 [2024-07-20 17:42:36.067471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.511 [2024-07-20 17:42:36.067541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.511 [2024-07-20 17:42:36.067635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.511 [2024-07-20 17:42:36.067638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.511 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.511 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.511 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.511 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.511 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.511 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.511 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.511 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.511 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.511 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.511 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.512 17:42:36 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 17:42:36 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.512 17:42:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.888 00:07:02.888 real 0m1.419s 00:07:02.888 user 0m4.716s 00:07:02.888 sys 0m0.156s 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.888 17:42:37 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:02.888 ************************************ 00:07:02.888 END TEST accel_decomp_mcore 00:07:02.888 ************************************ 00:07:02.888 17:42:37 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:02.888 17:42:37 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:02.888 17:42:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.888 17:42:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.888 ************************************ 00:07:02.888 START TEST accel_decomp_full_mcore 00:07:02.888 ************************************ 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:02.888 [2024-07-20 17:42:37.374175] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:02.888 [2024-07-20 17:42:37.374239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid829088 ] 00:07:02.888 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.888 [2024-07-20 17:42:37.436314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.888 [2024-07-20 17:42:37.530658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.888 [2024-07-20 17:42:37.530711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.888 [2024-07-20 17:42:37.530823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.888 [2024-07-20 17:42:37.530826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.888 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:02.889 17:42:37 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.889 17:42:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.265 00:07:04.265 real 0m1.428s 00:07:04.265 user 0m4.758s 00:07:04.265 sys 0m0.161s 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.265 17:42:38 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:04.265 ************************************ 00:07:04.265 END TEST accel_decomp_full_mcore 00:07:04.265 ************************************ 00:07:04.265 17:42:38 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:04.265 17:42:38 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:04.265 17:42:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.265 17:42:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.265 ************************************ 00:07:04.265 START TEST accel_decomp_mthread 00:07:04.265 ************************************ 00:07:04.265 17:42:38 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:04.265 17:42:38 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:04.265 17:42:38 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:04.265 17:42:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.265 17:42:38 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:04.265 17:42:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.265 17:42:38 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:04.265 17:42:38 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:04.265 17:42:38 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.265 17:42:38 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.265 17:42:38 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.265 17:42:38 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.265 17:42:38 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.265 17:42:38 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:04.265 17:42:38 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:07:04.265 [2024-07-20 17:42:38.845844] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:04.265 [2024-07-20 17:42:38.845907] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid829357 ] 00:07:04.265 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.265 [2024-07-20 17:42:38.907498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.265 [2024-07-20 17:42:38.998018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.265 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.265 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.265 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.265 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.265 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 17:42:39 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 17:42:39 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.461 00:07:05.461 real 0m1.402s 00:07:05.461 user 0m1.260s 00:07:05.461 sys 0m0.145s 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.461 17:42:40 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:05.461 ************************************ 00:07:05.461 END TEST accel_decomp_mthread 00:07:05.461 ************************************ 00:07:05.461 17:42:40 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.461 17:42:40 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:05.461 17:42:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.461 17:42:40 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.720 ************************************ 00:07:05.720 START TEST accel_decomp_full_mthread 00:07:05.720 ************************************ 00:07:05.720 17:42:40 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.720 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:05.720 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:05.720 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.720 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.720 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.720 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.720 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:05.720 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.720 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.720 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.720 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.720 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.720 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:05.720 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:05.720 [2024-07-20 17:42:40.296415] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:05.720 [2024-07-20 17:42:40.296474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid829523 ] 00:07:05.720 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.720 [2024-07-20 17:42:40.359282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.720 [2024-07-20 17:42:40.451940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.978 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.978 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- 
# read -r var val 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.979 17:42:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.353 00:07:07.353 real 0m1.444s 00:07:07.353 user 0m1.294s 00:07:07.353 sys 0m0.153s 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.353 17:42:41 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:07.353 ************************************ 00:07:07.353 END TEST accel_decomp_full_mthread 00:07:07.353 
************************************ 00:07:07.353 17:42:41 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:07.353 17:42:41 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:07.353 17:42:41 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:07.353 17:42:41 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.353 17:42:41 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:07.353 17:42:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.353 17:42:41 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.353 17:42:41 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.353 17:42:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.353 17:42:41 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.353 17:42:41 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.353 17:42:41 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:07.353 17:42:41 accel -- accel/accel.sh@41 -- # jq -r . 00:07:07.353 ************************************ 00:07:07.353 START TEST accel_dif_functional_tests 00:07:07.353 ************************************ 00:07:07.353 17:42:41 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:07.353 [2024-07-20 17:42:41.807481] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:07.353 [2024-07-20 17:42:41.807540] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid829679 ] 00:07:07.353 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.353 [2024-07-20 17:42:41.868770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.353 [2024-07-20 17:42:41.963984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.353 [2024-07-20 17:42:41.964012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.353 [2024-07-20 17:42:41.964015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.353 00:07:07.353 00:07:07.353 CUnit - A unit testing framework for C - Version 2.1-3 00:07:07.353 http://cunit.sourceforge.net/ 00:07:07.353 00:07:07.353 00:07:07.353 Suite: accel_dif 00:07:07.353 Test: verify: DIF generated, GUARD check ...passed 00:07:07.353 Test: verify: DIF generated, APPTAG check ...passed 00:07:07.353 Test: verify: DIF generated, REFTAG check ...passed 00:07:07.353 Test: verify: DIF not generated, GUARD check ...[2024-07-20 17:42:42.050833] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:07.353 passed 00:07:07.353 Test: verify: DIF not generated, APPTAG check ...[2024-07-20 17:42:42.050908] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:07.353 passed 00:07:07.353 Test: verify: DIF not generated, REFTAG check ...[2024-07-20 17:42:42.050941] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:07.353 passed 00:07:07.353 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:07.353 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-20 17:42:42.051003] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:07.353 passed 00:07:07.353 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:07.353 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:07.353 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:07.353 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-20 17:42:42.051147] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:07.353 passed 00:07:07.353 Test: verify copy: DIF generated, GUARD check ...passed 00:07:07.353 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:07.353 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:07.353 Test: verify copy: DIF not generated, GUARD check ...[2024-07-20 17:42:42.051297] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:07.353 passed 00:07:07.353 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-20 17:42:42.051332] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:07.353 passed 00:07:07.353 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-20 17:42:42.051364] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:07.353 passed 00:07:07.353 Test: generate copy: DIF generated, GUARD check ...passed 00:07:07.353 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:07.353 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:07.353 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:07.353 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:07.353 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:07.353 Test: generate copy: iovecs-len validate ...[2024-07-20 17:42:42.051572] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:07.353 passed 00:07:07.353 Test: generate copy: buffer alignment validate ...passed 00:07:07.353 00:07:07.353 Run Summary: Type Total Ran Passed Failed Inactive 00:07:07.353 suites 1 1 n/a 0 0 00:07:07.353 tests 26 26 26 0 0 00:07:07.353 asserts 115 115 115 0 n/a 00:07:07.353 00:07:07.353 Elapsed time = 0.002 seconds 00:07:07.611 00:07:07.611 real 0m0.494s 00:07:07.611 user 0m0.765s 00:07:07.611 sys 0m0.171s 00:07:07.611 17:42:42 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.611 17:42:42 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:07.611 ************************************ 00:07:07.611 END TEST accel_dif_functional_tests 00:07:07.611 ************************************ 00:07:07.611 00:07:07.611 real 0m31.695s 00:07:07.611 user 0m35.079s 00:07:07.611 sys 0m4.611s 00:07:07.611 17:42:42 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.611 17:42:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.611 ************************************ 00:07:07.611 END TEST accel 00:07:07.611 ************************************ 00:07:07.611 17:42:42 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:07.611 17:42:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:07.611 17:42:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.611 17:42:42 -- common/autotest_common.sh@10 -- # set +x 00:07:07.611 ************************************ 00:07:07.611 START TEST accel_rpc 00:07:07.611 ************************************ 00:07:07.611 17:42:42 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:07.611 * Looking for test storage... 00:07:07.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:07.611 17:42:42 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:07.611 17:42:42 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=829863 00:07:07.611 17:42:42 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:07.611 17:42:42 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 829863 00:07:07.611 17:42:42 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 829863 ']' 00:07:07.611 17:42:42 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.611 17:42:42 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:07.611 17:42:42 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.611 17:42:42 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:07.611 17:42:42 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.869 [2024-07-20 17:42:42.437512] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:07.869 [2024-07-20 17:42:42.437615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid829863 ] 00:07:07.869 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.869 [2024-07-20 17:42:42.502514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.869 [2024-07-20 17:42:42.593301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.869 17:42:42 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:07.869 17:42:42 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:07.869 17:42:42 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:07.869 17:42:42 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:07.869 17:42:42 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:07.869 17:42:42 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:07.869 17:42:42 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:07.869 17:42:42 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:07.869 17:42:42 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.869 17:42:42 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.128 ************************************ 00:07:08.128 START TEST accel_assign_opcode 00:07:08.128 ************************************ 00:07:08.128 17:42:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:08.128 17:42:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:08.128 17:42:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.128 17:42:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:08.128 [2024-07-20 17:42:42.674005] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:08.128 17:42:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.128 17:42:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:08.128 17:42:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.128 17:42:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:08.128 [2024-07-20 17:42:42.682001] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:08.128 17:42:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.128 17:42:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:08.128 17:42:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.128 17:42:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:08.385 17:42:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.385 17:42:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:08.385 17:42:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.385 17:42:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:08.385 17:42:42 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:08.385 17:42:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:08.385 17:42:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.385 software 00:07:08.385 00:07:08.385 real 0m0.294s 00:07:08.385 user 0m0.040s 00:07:08.385 sys 0m0.008s 00:07:08.385 17:42:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.385 17:42:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:08.385 ************************************ 00:07:08.385 END TEST accel_assign_opcode 00:07:08.385 ************************************ 00:07:08.385 17:42:42 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 829863 00:07:08.385 17:42:42 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 829863 ']' 00:07:08.385 17:42:42 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 829863 00:07:08.385 17:42:42 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:08.385 17:42:42 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:08.385 17:42:42 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 829863 00:07:08.385 17:42:43 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:08.385 17:42:43 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:08.385 17:42:43 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 829863' 00:07:08.385 killing process with pid 829863 00:07:08.385 17:42:43 accel_rpc -- common/autotest_common.sh@965 -- # kill 829863 00:07:08.385 17:42:43 accel_rpc -- common/autotest_common.sh@970 -- # wait 829863 00:07:08.643 00:07:08.643 real 0m1.080s 00:07:08.643 user 0m1.011s 00:07:08.643 sys 0m0.430s 00:07:08.643 17:42:43 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.643 17:42:43 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.643 ************************************ 00:07:08.643 END TEST accel_rpc 00:07:08.643 ************************************ 00:07:08.643 17:42:43 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:08.901 17:42:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:08.901 17:42:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.901 17:42:43 -- common/autotest_common.sh@10 -- # set +x 00:07:08.901 ************************************ 00:07:08.901 START TEST app_cmdline 00:07:08.901 ************************************ 00:07:08.901 17:42:43 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:08.901 * Looking for test storage... 
00:07:08.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:08.901 17:42:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:08.901 17:42:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=830067 00:07:08.901 17:42:43 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:08.901 17:42:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 830067 00:07:08.901 17:42:43 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 830067 ']' 00:07:08.901 17:42:43 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.901 17:42:43 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:08.901 17:42:43 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.901 17:42:43 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:08.901 17:42:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:08.901 [2024-07-20 17:42:43.567705] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:08.901 [2024-07-20 17:42:43.567822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid830067 ] 00:07:08.901 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.901 [2024-07-20 17:42:43.629887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.159 [2024-07-20 17:42:43.723366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.417 17:42:43 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:09.417 17:42:43 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:09.417 17:42:43 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:09.417 { 00:07:09.417 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:07:09.417 "fields": { 00:07:09.417 "major": 24, 00:07:09.417 "minor": 5, 00:07:09.417 "patch": 1, 00:07:09.417 "suffix": "-pre", 00:07:09.417 "commit": "5fa2f5086" 00:07:09.417 } 00:07:09.417 } 00:07:09.675 17:42:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:09.675 17:42:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:09.675 17:42:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:09.675 17:42:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:09.675 17:42:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:09.675 17:42:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:09.675 17:42:44 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.675 17:42:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:09.675 17:42:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:09.675 17:42:44 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.675 17:42:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:09.675 17:42:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:09.675 17:42:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.675 17:42:44 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:09.675 17:42:44 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.675 17:42:44 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.675 17:42:44 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.675 17:42:44 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.675 17:42:44 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.675 17:42:44 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.675 17:42:44 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.675 17:42:44 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.675 17:42:44 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:09.675 17:42:44 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.933 request: 00:07:09.933 { 00:07:09.933 "method": "env_dpdk_get_mem_stats", 00:07:09.933 "req_id": 1 00:07:09.933 } 00:07:09.933 Got JSON-RPC error response 00:07:09.933 response: 00:07:09.933 { 00:07:09.933 "code": -32601, 00:07:09.933 "message": "Method not found" 00:07:09.933 } 00:07:09.933 17:42:44 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:09.933 17:42:44 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:09.933 17:42:44 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:09.933 17:42:44 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:09.933 17:42:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 830067 00:07:09.933 17:42:44 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 830067 ']' 00:07:09.933 17:42:44 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 830067 00:07:09.933 17:42:44 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:09.933 17:42:44 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:09.933 17:42:44 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 830067 00:07:09.933 17:42:44 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:09.933 17:42:44 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:09.933 17:42:44 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 830067' 00:07:09.933 killing process with pid 830067 00:07:09.933 17:42:44 app_cmdline -- common/autotest_common.sh@965 -- # kill 830067 00:07:09.933 17:42:44 app_cmdline -- common/autotest_common.sh@970 -- # wait 830067 00:07:10.192 00:07:10.192 real 0m1.461s 00:07:10.192 user 0m1.785s 00:07:10.192 sys 0m0.473s 00:07:10.192 17:42:44 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.192 17:42:44 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:07:10.192 ************************************ 00:07:10.192 END TEST app_cmdline 00:07:10.192 ************************************ 00:07:10.192 17:42:44 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:10.192 17:42:44 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:10.192 17:42:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.192 17:42:44 -- common/autotest_common.sh@10 -- # set +x 00:07:10.192 ************************************ 00:07:10.192 START TEST version 00:07:10.192 ************************************ 00:07:10.192 17:42:44 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:10.449 * Looking for test storage... 00:07:10.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:10.449 17:42:45 version -- app/version.sh@17 -- # get_header_version major 00:07:10.449 17:42:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.449 17:42:45 version -- app/version.sh@14 -- # cut -f2 00:07:10.449 17:42:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.449 17:42:45 version -- app/version.sh@17 -- # major=24 00:07:10.449 17:42:45 version -- app/version.sh@18 -- # get_header_version minor 00:07:10.450 17:42:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.450 17:42:45 version -- app/version.sh@14 -- # cut -f2 00:07:10.450 17:42:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.450 17:42:45 version -- app/version.sh@18 -- # minor=5 00:07:10.450 17:42:45 version -- app/version.sh@19 -- # get_header_version patch 00:07:10.450 17:42:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.450 17:42:45 version -- app/version.sh@14 -- # cut -f2 00:07:10.450 17:42:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.450 17:42:45 version -- app/version.sh@19 -- # patch=1 00:07:10.450 17:42:45 version -- app/version.sh@20 -- # get_header_version suffix 00:07:10.450 17:42:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.450 17:42:45 version -- app/version.sh@14 -- # cut -f2 00:07:10.450 17:42:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.450 17:42:45 version -- app/version.sh@20 -- # suffix=-pre 00:07:10.450 17:42:45 version -- app/version.sh@22 -- # version=24.5 00:07:10.450 17:42:45 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:10.450 17:42:45 version -- app/version.sh@25 -- # version=24.5.1 00:07:10.450 17:42:45 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:10.450 17:42:45 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:10.450 17:42:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:10.450 17:42:45 
version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:10.450 17:42:45 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:10.450 00:07:10.450 real 0m0.105s 00:07:10.450 user 0m0.053s 00:07:10.450 sys 0m0.072s 00:07:10.450 17:42:45 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.450 17:42:45 version -- common/autotest_common.sh@10 -- # set +x 00:07:10.450 ************************************ 00:07:10.450 END TEST version 00:07:10.450 ************************************ 00:07:10.450 17:42:45 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:10.450 17:42:45 -- spdk/autotest.sh@198 -- # uname -s 00:07:10.450 17:42:45 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:10.450 17:42:45 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:10.450 17:42:45 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:10.450 17:42:45 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:10.450 17:42:45 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:10.450 17:42:45 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:10.450 17:42:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:10.450 17:42:45 -- common/autotest_common.sh@10 -- # set +x 00:07:10.450 17:42:45 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:10.450 17:42:45 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:10.450 17:42:45 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:10.450 17:42:45 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:10.450 17:42:45 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:10.450 17:42:45 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:10.450 17:42:45 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:10.450 17:42:45 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:10.450 17:42:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.450 17:42:45 -- common/autotest_common.sh@10 -- # set +x 00:07:10.450 ************************************ 00:07:10.450 START TEST nvmf_tcp 00:07:10.450 ************************************ 00:07:10.450 17:42:45 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:10.450 * Looking for test storage... 00:07:10.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.450 17:42:45 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.450 17:42:45 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.450 17:42:45 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.450 17:42:45 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.450 17:42:45 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.450 17:42:45 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.450 17:42:45 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:10.450 17:42:45 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:10.450 17:42:45 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:10.450 17:42:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:10.450 17:42:45 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:10.450 17:42:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:10.450 17:42:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.450 17:42:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:10.450 ************************************ 00:07:10.450 START TEST nvmf_example 00:07:10.450 ************************************ 00:07:10.450 17:42:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:10.718 * Looking for test storage... 
00:07:10.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.718 17:42:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.719 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:10.719 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:10.719 17:42:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:10.719 17:42:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:12.620 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:12.620 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:12.620 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:12.620 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:12.620 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:12.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:12.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:07:12.621 00:07:12.621 --- 10.0.0.2 ping statistics --- 00:07:12.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.621 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:12.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:12.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:07:12.621 00:07:12.621 --- 10.0.0.1 ping statistics --- 00:07:12.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.621 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=831978 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 831978 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 831978 ']' 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
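The nvmf_tcp_init steps traced above build the loopback TCP test bed: the second ice port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens port 4420, and two pings verify reachability in both directions. A minimal standalone sketch of the same sequence, assuming the ports enumerate as cvl_0_0/cvl_0_1 exactly as in this run, would be:

    # clear any stale addressing on both ports
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # put the target-side port into its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1, target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic in on the initiator interface, then sanity-check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1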
00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:12.621 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:12.879 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:13.136 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.137 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.137 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.137 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:13.137 17:42:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.137 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:13.137 17:42:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:13.137 EAL: No free 2048 kB hugepages reported on node 1 
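The trace above starts the example nvmf target inside the namespace and provisions it over JSON-RPC: a TCP transport (with the -o and -u 8192 options shown), a 64 MiB / 512-byte-block malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 10.0.0.2:4420, after which spdk_nvme_perf drives it. rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; a hedged sketch of the equivalent direct calls (the wrapper body is not expanded in this trace) would be:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS="ip netns exec cvl_0_0_ns_spdk"
    # launch the example target on cores 0-3 inside the target namespace
    $NS $SPDK/build/examples/nvmf -i 0 -g 10000 -m 0xF &
    # provision it over the default RPC socket (/var/tmp/spdk.sock)
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512          # returns Malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # exercise it from the initiator side: QD 64, 4 KiB random mixed I/O for 10 seconds
    $SPDK/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The perf run below reports a single latency line per NVMe-oF queue pair plus a total, which is what the IOPS/latency table that follows summarizes.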
00:07:23.191 Initializing NVMe Controllers 00:07:23.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:23.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:23.191 Initialization complete. Launching workers. 00:07:23.191 ======================================================== 00:07:23.191 Latency(us) 00:07:23.191 Device Information : IOPS MiB/s Average min max 00:07:23.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13286.00 51.90 4817.86 900.31 15256.31 00:07:23.191 ======================================================== 00:07:23.191 Total : 13286.00 51.90 4817.86 900.31 15256.31 00:07:23.191 00:07:23.191 17:42:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:23.191 17:42:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:23.191 17:42:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:23.191 17:42:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:23.191 17:42:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:23.191 17:42:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:23.191 17:42:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:23.191 17:42:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:23.191 rmmod nvme_tcp 00:07:23.449 rmmod nvme_fabrics 00:07:23.449 rmmod nvme_keyring 00:07:23.449 17:42:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:23.449 17:42:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:23.449 17:42:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:23.449 17:42:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 831978 ']' 00:07:23.449 17:42:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 831978 00:07:23.449 17:42:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 831978 ']' 00:07:23.449 17:42:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 831978 00:07:23.449 17:42:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:23.449 17:42:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:23.449 17:42:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 831978 00:07:23.449 17:42:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:23.449 17:42:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:23.449 17:42:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 831978' 00:07:23.449 killing process with pid 831978 00:07:23.449 17:42:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 831978 00:07:23.449 17:42:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 831978 00:07:23.707 nvmf threads initialize successfully 00:07:23.707 bdev subsystem init successfully 00:07:23.707 created a nvmf target service 00:07:23.707 create targets's poll groups done 00:07:23.707 all subsystems of target started 00:07:23.707 nvmf target is running 00:07:23.707 all subsystems of target stopped 00:07:23.707 destroy targets's poll groups done 00:07:23.707 destroyed the nvmf target service 00:07:23.707 bdev subsystem finish successfully 00:07:23.707 nvmf threads destroy successfully 00:07:23.707 17:42:58 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:23.707 17:42:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:23.707 17:42:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:23.707 17:42:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:23.707 17:42:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:23.707 17:42:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.707 17:42:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.707 17:42:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.608 17:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:25.608 17:43:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:25.608 17:43:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.608 17:43:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:25.608 00:07:25.608 real 0m15.116s 00:07:25.608 user 0m42.258s 00:07:25.608 sys 0m3.081s 00:07:25.608 17:43:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.608 17:43:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:25.608 ************************************ 00:07:25.608 END TEST nvmf_example 00:07:25.608 ************************************ 00:07:25.608 17:43:00 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:25.608 17:43:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:25.608 17:43:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.608 17:43:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:25.608 ************************************ 00:07:25.608 START TEST nvmf_filesystem 00:07:25.608 ************************************ 00:07:25.608 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:25.876 * Looking for test storage... 
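The teardown traced above (nvmftestfini plus the example target's own shutdown messages) mirrors the setup: unload the kernel NVMe/TCP initiator modules, stop the target process, then strip the test addressing and namespace. A condensed sketch using the PID and names from this run, and assuming _remove_spdk_ns (its body is hidden behind xtrace_disable_per_cmd here) ends by deleting the namespace, would be:

    # drop the kernel initiator modules pulled in earlier by 'modprobe nvme-tcp'
    modprobe -v -r nvme-tcp        # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as logged above
    modprobe -v -r nvme-fabrics
    # stop the example target and reap it (the harness started it from this shell, so wait works)
    kill 831978
    wait 831978
    # remove the test addressing and the target namespace
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk   # assumption: the exact deletion command is not shown in this trace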
00:07:25.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:25.876 17:43:00 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:25.876 17:43:00 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:25.876 #define SPDK_CONFIG_H 00:07:25.876 #define SPDK_CONFIG_APPS 1 00:07:25.876 #define SPDK_CONFIG_ARCH native 00:07:25.876 #undef SPDK_CONFIG_ASAN 00:07:25.876 #undef SPDK_CONFIG_AVAHI 00:07:25.876 #undef SPDK_CONFIG_CET 00:07:25.876 #define SPDK_CONFIG_COVERAGE 1 00:07:25.876 #define SPDK_CONFIG_CROSS_PREFIX 00:07:25.876 #undef SPDK_CONFIG_CRYPTO 00:07:25.876 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:25.876 #undef SPDK_CONFIG_CUSTOMOCF 00:07:25.876 #undef SPDK_CONFIG_DAOS 00:07:25.876 #define SPDK_CONFIG_DAOS_DIR 00:07:25.876 #define SPDK_CONFIG_DEBUG 1 00:07:25.876 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:25.876 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:25.876 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:25.876 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:25.876 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:25.876 #undef SPDK_CONFIG_DPDK_UADK 00:07:25.876 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:25.876 #define SPDK_CONFIG_EXAMPLES 1 00:07:25.876 #undef SPDK_CONFIG_FC 00:07:25.876 #define SPDK_CONFIG_FC_PATH 00:07:25.876 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:25.876 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:25.876 #undef SPDK_CONFIG_FUSE 00:07:25.876 #undef SPDK_CONFIG_FUZZER 00:07:25.876 #define SPDK_CONFIG_FUZZER_LIB 00:07:25.876 #undef SPDK_CONFIG_GOLANG 00:07:25.876 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:25.876 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:25.876 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:25.876 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:25.876 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:25.876 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:25.876 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:25.876 #define SPDK_CONFIG_IDXD 1 00:07:25.876 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:25.876 #undef SPDK_CONFIG_IPSEC_MB 00:07:25.876 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:25.876 #define SPDK_CONFIG_ISAL 1 00:07:25.876 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:25.876 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:25.876 #define SPDK_CONFIG_LIBDIR 00:07:25.876 #undef SPDK_CONFIG_LTO 00:07:25.876 #define SPDK_CONFIG_MAX_LCORES 
00:07:25.876 #define SPDK_CONFIG_NVME_CUSE 1 00:07:25.876 #undef SPDK_CONFIG_OCF 00:07:25.876 #define SPDK_CONFIG_OCF_PATH 00:07:25.876 #define SPDK_CONFIG_OPENSSL_PATH 00:07:25.876 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:25.876 #define SPDK_CONFIG_PGO_DIR 00:07:25.876 #undef SPDK_CONFIG_PGO_USE 00:07:25.876 #define SPDK_CONFIG_PREFIX /usr/local 00:07:25.876 #undef SPDK_CONFIG_RAID5F 00:07:25.876 #undef SPDK_CONFIG_RBD 00:07:25.876 #define SPDK_CONFIG_RDMA 1 00:07:25.876 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:25.876 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:25.876 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:25.876 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:25.876 #define SPDK_CONFIG_SHARED 1 00:07:25.876 #undef SPDK_CONFIG_SMA 00:07:25.876 #define SPDK_CONFIG_TESTS 1 00:07:25.876 #undef SPDK_CONFIG_TSAN 00:07:25.876 #define SPDK_CONFIG_UBLK 1 00:07:25.876 #define SPDK_CONFIG_UBSAN 1 00:07:25.876 #undef SPDK_CONFIG_UNIT_TESTS 00:07:25.876 #undef SPDK_CONFIG_URING 00:07:25.876 #define SPDK_CONFIG_URING_PATH 00:07:25.876 #undef SPDK_CONFIG_URING_ZNS 00:07:25.876 #undef SPDK_CONFIG_USDT 00:07:25.876 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:25.876 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:25.876 #define SPDK_CONFIG_VFIO_USER 1 00:07:25.876 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:25.876 #define SPDK_CONFIG_VHOST 1 00:07:25.876 #define SPDK_CONFIG_VIRTIO 1 00:07:25.876 #undef SPDK_CONFIG_VTUNE 00:07:25.876 #define SPDK_CONFIG_VTUNE_DIR 00:07:25.876 #define SPDK_CONFIG_WERROR 1 00:07:25.876 #define SPDK_CONFIG_WPDK_DIR 00:07:25.876 #undef SPDK_CONFIG_XNVME 00:07:25.876 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.876 17:43:00 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v22.11.4 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:25.877 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 833670 ]] 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 833670 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.dDa5NY 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.dDa5NY/tests/target /tmp/spdk.dDa5NY 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=953643008 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4330786816 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=53449342976 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994721280 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8545378304 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30941724672 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997360640 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=55635968 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12390182912 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398944256 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8761344 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:25.878 17:43:00 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30995251200 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997360640 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=2109440 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199468032 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199472128 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:25.878 * Looking for test storage... 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=53449342976 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=10759970816 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:25.878 
17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.878 17:43:00 
nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.878 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:25.879 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.879 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:25.879 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:25.879 17:43:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:25.879 17:43:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:28.416 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:28.416 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.416 17:43:02 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:28.416 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:28.416 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.416 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:28.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:07:28.417 00:07:28.417 --- 10.0.0.2 ping statistics --- 00:07:28.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.417 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:28.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:07:28.417 00:07:28.417 --- 10.0.0.1 ping statistics --- 00:07:28.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.417 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.417 ************************************ 00:07:28.417 START TEST nvmf_filesystem_no_in_capsule 00:07:28.417 ************************************ 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:28.417 17:43:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=835301 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 835301 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 835301 ']' 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:28.417 17:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.417 [2024-07-20 17:43:02.958384] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:28.417 [2024-07-20 17:43:02.958478] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.417 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.417 [2024-07-20 17:43:03.029821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.417 [2024-07-20 17:43:03.126223] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.417 [2024-07-20 17:43:03.126274] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.417 [2024-07-20 17:43:03.126301] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.417 [2024-07-20 17:43:03.126315] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.417 [2024-07-20 17:43:03.126327] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:28.417 [2024-07-20 17:43:03.126412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.417 [2024-07-20 17:43:03.126469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.417 [2024-07-20 17:43:03.126523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.417 [2024-07-20 17:43:03.126521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.678 [2024-07-20 17:43:03.286332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.678 Malloc1 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.678 [2024-07-20 17:43:03.461379] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.678 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.934 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.935 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:28.935 { 00:07:28.935 "name": "Malloc1", 00:07:28.935 "aliases": [ 00:07:28.935 "26a62a00-040b-420a-a706-aff25ca42ce1" 00:07:28.935 ], 00:07:28.935 "product_name": "Malloc disk", 00:07:28.935 "block_size": 512, 00:07:28.935 "num_blocks": 1048576, 00:07:28.935 "uuid": "26a62a00-040b-420a-a706-aff25ca42ce1", 00:07:28.935 "assigned_rate_limits": { 00:07:28.935 "rw_ios_per_sec": 0, 00:07:28.935 "rw_mbytes_per_sec": 0, 00:07:28.935 "r_mbytes_per_sec": 0, 00:07:28.935 "w_mbytes_per_sec": 0 00:07:28.935 }, 00:07:28.935 "claimed": true, 00:07:28.935 "claim_type": "exclusive_write", 00:07:28.935 "zoned": false, 00:07:28.935 "supported_io_types": { 00:07:28.935 "read": true, 00:07:28.935 "write": true, 00:07:28.935 "unmap": true, 00:07:28.935 "write_zeroes": true, 00:07:28.935 "flush": true, 00:07:28.935 "reset": true, 00:07:28.935 "compare": false, 00:07:28.935 "compare_and_write": false, 00:07:28.935 "abort": true, 00:07:28.935 "nvme_admin": false, 00:07:28.935 "nvme_io": false 00:07:28.935 }, 00:07:28.935 "memory_domains": [ 00:07:28.935 { 00:07:28.935 "dma_device_id": "system", 00:07:28.935 "dma_device_type": 1 00:07:28.935 }, 00:07:28.935 { 00:07:28.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.935 "dma_device_type": 2 00:07:28.935 } 00:07:28.935 ], 00:07:28.935 "driver_specific": {} 00:07:28.935 } 00:07:28.935 ]' 00:07:28.935 
17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:28.935 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:28.935 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:28.935 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:28.935 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:28.935 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:28.935 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:28.935 17:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.499 17:43:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:29.499 17:43:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:29.499 17:43:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:29.499 17:43:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:29.499 17:43:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:31.393 17:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:31.393 17:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:31.393 17:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:31.393 17:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:31.393 17:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:31.393 17:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:31.393 17:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:31.393 17:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:31.393 17:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:31.393 17:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:31.393 17:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:31.393 17:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:31.393 17:43:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:31.393 17:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:31.393 17:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:31.393 17:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:31.393 17:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:31.957 17:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:32.889 17:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:33.821 17:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:33.821 17:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:33.822 17:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:33.822 17:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.822 17:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.822 ************************************ 00:07:33.822 START TEST filesystem_ext4 00:07:33.822 ************************************ 00:07:33.822 17:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:33.822 17:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:33.822 17:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:33.822 17:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:33.822 17:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:33.822 17:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:33.822 17:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:33.822 17:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:33.822 17:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:33.822 17:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:33.822 17:43:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:33.822 mke2fs 1.46.5 (30-Dec-2021) 00:07:34.078 Discarding device blocks: 0/522240 done 00:07:34.078 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:34.078 
Filesystem UUID: a8a98cfc-dc60-47d8-a780-3d03cac348d9 00:07:34.078 Superblock backups stored on blocks: 00:07:34.078 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:34.078 00:07:34.078 Allocating group tables: 0/64 done 00:07:34.078 Writing inode tables: 0/64 done 00:07:34.335 Creating journal (8192 blocks): done 00:07:35.265 Writing superblocks and filesystem accounting information: 0/64 done 00:07:35.265 00:07:35.265 17:43:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:35.265 17:43:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:35.265 17:43:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:35.265 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:35.265 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:35.265 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:35.265 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:35.265 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:35.265 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 835301 00:07:35.265 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:35.265 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:35.265 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:35.265 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:35.265 00:07:35.265 real 0m1.483s 00:07:35.265 user 0m0.013s 00:07:35.265 sys 0m0.036s 00:07:35.265 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.265 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:35.265 ************************************ 00:07:35.265 END TEST filesystem_ext4 00:07:35.265 ************************************ 00:07:35.522 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:35.522 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:35.522 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.522 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.522 ************************************ 00:07:35.522 START TEST filesystem_btrfs 00:07:35.522 ************************************ 00:07:35.522 17:43:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:35.522 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:35.522 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:35.522 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:35.522 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:35.522 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:35.522 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:35.522 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:35.522 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:35.522 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:35.522 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:36.085 btrfs-progs v6.6.2 00:07:36.085 See https://btrfs.readthedocs.io for more information. 00:07:36.085 00:07:36.085 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:36.085 NOTE: several default settings have changed in version 5.15, please make sure 00:07:36.085 this does not affect your deployments: 00:07:36.085 - DUP for metadata (-m dup) 00:07:36.085 - enabled no-holes (-O no-holes) 00:07:36.085 - enabled free-space-tree (-R free-space-tree) 00:07:36.085 00:07:36.085 Label: (null) 00:07:36.085 UUID: 78203328-4c53-43e6-bf65-ea0ecb8fd557 00:07:36.085 Node size: 16384 00:07:36.085 Sector size: 4096 00:07:36.085 Filesystem size: 510.00MiB 00:07:36.085 Block group profiles: 00:07:36.085 Data: single 8.00MiB 00:07:36.085 Metadata: DUP 32.00MiB 00:07:36.085 System: DUP 8.00MiB 00:07:36.085 SSD detected: yes 00:07:36.085 Zoned device: no 00:07:36.085 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:36.085 Runtime features: free-space-tree 00:07:36.085 Checksum: crc32c 00:07:36.085 Number of devices: 1 00:07:36.085 Devices: 00:07:36.085 ID SIZE PATH 00:07:36.085 1 510.00MiB /dev/nvme0n1p1 00:07:36.085 00:07:36.085 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:36.085 17:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 835301 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:36.650 00:07:36.650 real 0m1.273s 00:07:36.650 user 0m0.013s 00:07:36.650 sys 0m0.047s 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:36.650 ************************************ 00:07:36.650 END TEST filesystem_btrfs 00:07:36.650 ************************************ 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:36.650 17:43:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.650 ************************************ 00:07:36.650 START TEST filesystem_xfs 00:07:36.650 ************************************ 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:36.650 17:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:36.907 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:36.907 = sectsz=512 attr=2, projid32bit=1 00:07:36.907 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:36.907 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:36.907 data = bsize=4096 blocks=130560, imaxpct=25 00:07:36.907 = sunit=0 swidth=0 blks 00:07:36.907 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:36.907 log =internal log bsize=4096 blocks=16384, version=2 00:07:36.907 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:36.907 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:37.844 Discarding blocks...Done. 
00:07:37.844 17:43:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:37.844 17:43:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:40.417 17:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:40.417 17:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:40.417 17:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:40.417 17:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:40.417 17:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:40.417 17:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:40.418 17:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 835301 00:07:40.418 17:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:40.418 17:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:40.418 17:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:40.418 17:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:40.418 00:07:40.418 real 0m3.572s 00:07:40.418 user 0m0.011s 00:07:40.418 sys 0m0.035s 00:07:40.418 17:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.418 17:43:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:40.418 ************************************ 00:07:40.418 END TEST filesystem_xfs 00:07:40.418 ************************************ 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:40.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:40.418 
17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 835301 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 835301 ']' 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 835301 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 835301 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 835301' 00:07:40.418 killing process with pid 835301 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 835301 00:07:40.418 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 835301 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:40.982 00:07:40.982 real 0m12.749s 00:07:40.982 user 0m48.855s 00:07:40.982 sys 0m1.718s 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.982 ************************************ 00:07:40.982 END TEST nvmf_filesystem_no_in_capsule 00:07:40.982 ************************************ 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.982 
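The nvmf_filesystem_no_in_capsule phase above reduces to a short target/host sequence. A condensed sketch reconstructed only from commands visible in the trace (rpc_cmd is the test suite's wrapper around SPDK's rpc.py; the scripts/rpc.py path and the shortened nvme connect flags are assumptions, the arguments themselves are taken from the log):

  # target side: TCP transport with in-capsule data disabled (-c 0), one 512 MiB malloc namespace
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host side: connect, partition, then hand /dev/nvme0n1p1 to the per-filesystem subtests
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%

The nvmf_filesystem_in_capsule run that follows repeats the same steps, differing only in passing -c 4096 to nvmf_create_transport.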
************************************ 00:07:40.982 START TEST nvmf_filesystem_in_capsule 00:07:40.982 ************************************ 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=837004 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 837004 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 837004 ']' 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:40.982 17:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.982 [2024-07-20 17:43:15.758650] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:40.982 [2024-07-20 17:43:15.758742] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.239 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.239 [2024-07-20 17:43:15.824493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.239 [2024-07-20 17:43:15.913051] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.239 [2024-07-20 17:43:15.913128] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.239 [2024-07-20 17:43:15.913141] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.239 [2024-07-20 17:43:15.913153] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.239 [2024-07-20 17:43:15.913163] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:41.239 [2024-07-20 17:43:15.913224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.239 [2024-07-20 17:43:15.913286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.240 [2024-07-20 17:43:15.913351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.240 [2024-07-20 17:43:15.913354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.497 [2024-07-20 17:43:16.065614] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.497 Malloc1 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.497 17:43:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.497 [2024-07-20 17:43:16.233094] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:41.497 { 00:07:41.497 "name": "Malloc1", 00:07:41.497 "aliases": [ 00:07:41.497 "908be7a2-85f0-4e2a-a98e-6c12c57fc8bd" 00:07:41.497 ], 00:07:41.497 "product_name": "Malloc disk", 00:07:41.497 "block_size": 512, 00:07:41.497 "num_blocks": 1048576, 00:07:41.497 "uuid": "908be7a2-85f0-4e2a-a98e-6c12c57fc8bd", 00:07:41.497 "assigned_rate_limits": { 00:07:41.497 "rw_ios_per_sec": 0, 00:07:41.497 "rw_mbytes_per_sec": 0, 00:07:41.497 "r_mbytes_per_sec": 0, 00:07:41.497 "w_mbytes_per_sec": 0 00:07:41.497 }, 00:07:41.497 "claimed": true, 00:07:41.497 "claim_type": "exclusive_write", 00:07:41.497 "zoned": false, 00:07:41.497 "supported_io_types": { 00:07:41.497 "read": true, 00:07:41.497 "write": true, 00:07:41.497 "unmap": true, 00:07:41.497 "write_zeroes": true, 00:07:41.497 "flush": true, 00:07:41.497 "reset": true, 00:07:41.497 "compare": false, 00:07:41.497 "compare_and_write": false, 00:07:41.497 "abort": true, 00:07:41.497 "nvme_admin": false, 00:07:41.497 "nvme_io": false 00:07:41.497 }, 00:07:41.497 "memory_domains": [ 00:07:41.497 { 00:07:41.497 "dma_device_id": "system", 00:07:41.497 "dma_device_type": 1 00:07:41.497 }, 00:07:41.497 { 00:07:41.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.497 "dma_device_type": 2 00:07:41.497 } 00:07:41.497 ], 00:07:41.497 "driver_specific": {} 00:07:41.497 } 00:07:41.497 ]' 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:07:41.497 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:41.754 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:41.754 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:41.754 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:41.754 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:41.754 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:41.754 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:42.318 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:42.318 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:42.318 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:42.318 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:42.318 17:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:44.216 17:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:44.216 17:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:44.216 17:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:44.216 17:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:44.216 17:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:44.216 17:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:44.216 17:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:44.216 17:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:44.216 17:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:44.216 17:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:44.216 17:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:44.216 17:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:44.216 17:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:44.216 17:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:07:44.216 17:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:44.216 17:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:44.216 17:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:44.781 17:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:45.038 17:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:46.410 17:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:46.410 17:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:46.410 17:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:46.410 17:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.410 17:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:46.410 ************************************ 00:07:46.410 START TEST filesystem_in_capsule_ext4 00:07:46.410 ************************************ 00:07:46.410 17:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:46.410 17:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:46.410 17:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:46.410 17:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:46.410 17:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:46.410 17:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:46.410 17:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:46.410 17:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:46.410 17:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:46.410 17:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:46.410 17:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:46.410 mke2fs 1.46.5 (30-Dec-2021) 00:07:46.410 Discarding device blocks: 0/522240 done 00:07:46.410 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:46.410 Filesystem UUID: f58bfab3-80c0-480b-8ab9-05dcbfc092c6 00:07:46.410 Superblock backups stored on blocks: 00:07:46.410 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:46.410 00:07:46.411 Allocating group tables: 0/64 done 00:07:46.411 Writing inode tables: 0/64 done 00:07:46.411 Creating journal (8192 blocks): done 00:07:46.411 Writing superblocks and filesystem accounting information: 0/64 done 00:07:46.411 00:07:46.411 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:46.411 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:46.976 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:46.976 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:46.976 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:46.976 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:46.976 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:46.976 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:46.976 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 837004 00:07:46.976 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:46.976 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:46.976 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:46.976 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:46.976 00:07:46.976 real 0m0.906s 00:07:46.976 user 0m0.013s 00:07:46.976 sys 0m0.035s 00:07:46.976 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.976 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:46.976 ************************************ 00:07:46.976 END TEST filesystem_in_capsule_ext4 00:07:46.976 ************************************ 00:07:47.234 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:47.234 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:47.234 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.234 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.234 ************************************ 00:07:47.234 START TEST filesystem_in_capsule_btrfs 00:07:47.234 ************************************ 00:07:47.234 17:43:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:47.234 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:47.234 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.234 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:47.234 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:47.234 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:47.234 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:47.234 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:47.235 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:47.235 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:47.235 17:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:47.492 btrfs-progs v6.6.2 00:07:47.492 See https://btrfs.readthedocs.io for more information. 00:07:47.492 00:07:47.492 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:47.492 NOTE: several default settings have changed in version 5.15, please make sure 00:07:47.492 this does not affect your deployments: 00:07:47.492 - DUP for metadata (-m dup) 00:07:47.492 - enabled no-holes (-O no-holes) 00:07:47.492 - enabled free-space-tree (-R free-space-tree) 00:07:47.492 00:07:47.492 Label: (null) 00:07:47.492 UUID: b7c328b1-82ef-4290-b473-54ff8b1ff7c8 00:07:47.492 Node size: 16384 00:07:47.492 Sector size: 4096 00:07:47.492 Filesystem size: 510.00MiB 00:07:47.492 Block group profiles: 00:07:47.492 Data: single 8.00MiB 00:07:47.492 Metadata: DUP 32.00MiB 00:07:47.492 System: DUP 8.00MiB 00:07:47.492 SSD detected: yes 00:07:47.492 Zoned device: no 00:07:47.492 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:47.492 Runtime features: free-space-tree 00:07:47.492 Checksum: crc32c 00:07:47.492 Number of devices: 1 00:07:47.492 Devices: 00:07:47.492 ID SIZE PATH 00:07:47.492 1 510.00MiB /dev/nvme0n1p1 00:07:47.492 00:07:47.492 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:47.492 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:48.056 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:48.056 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 837004 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:48.313 00:07:48.313 real 0m1.086s 00:07:48.313 user 0m0.013s 00:07:48.313 sys 0m0.045s 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:48.313 ************************************ 00:07:48.313 END TEST filesystem_in_capsule_btrfs 00:07:48.313 ************************************ 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.313 ************************************ 00:07:48.313 START TEST filesystem_in_capsule_xfs 00:07:48.313 ************************************ 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:48.313 17:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:48.313 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:48.313 = sectsz=512 attr=2, projid32bit=1 00:07:48.313 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:48.313 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:48.313 data = bsize=4096 blocks=130560, imaxpct=25 00:07:48.313 = sunit=0 swidth=0 blks 00:07:48.313 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:48.313 log =internal log bsize=4096 blocks=16384, version=2 00:07:48.313 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:48.313 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:49.243 Discarding blocks...Done. 
00:07:49.243 17:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:49.243 17:43:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:51.155 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:51.155 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:51.155 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:51.155 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:51.155 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:51.155 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:51.155 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 837004 00:07:51.155 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:51.155 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:51.155 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:51.155 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:51.155 00:07:51.155 real 0m2.828s 00:07:51.155 user 0m0.014s 00:07:51.155 sys 0m0.042s 00:07:51.155 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.155 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:51.155 ************************************ 00:07:51.155 END TEST filesystem_in_capsule_xfs 00:07:51.155 ************************************ 00:07:51.155 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:51.155 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:51.155 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:51.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.412 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:51.412 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:51.412 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:51.412 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.412 17:43:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:51.412 17:43:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.412 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:51.412 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.412 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.412 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.412 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.412 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:51.412 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 837004 00:07:51.412 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 837004 ']' 00:07:51.412 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 837004 00:07:51.412 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:51.412 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:51.412 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 837004 00:07:51.412 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:51.412 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:51.412 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 837004' 00:07:51.412 killing process with pid 837004 00:07:51.412 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 837004 00:07:51.412 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 837004 00:07:51.980 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:51.980 00:07:51.980 real 0m10.780s 00:07:51.980 user 0m41.215s 00:07:51.980 sys 0m1.629s 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.981 ************************************ 00:07:51.981 END TEST nvmf_filesystem_in_capsule 00:07:51.981 ************************************ 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@120 -- # set +e 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:51.981 rmmod nvme_tcp 00:07:51.981 rmmod nvme_fabrics 00:07:51.981 rmmod nvme_keyring 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.981 17:43:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.905 17:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:53.905 00:07:53.905 real 0m28.213s 00:07:53.905 user 1m31.055s 00:07:53.905 sys 0m5.052s 00:07:53.905 17:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.905 17:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:53.905 ************************************ 00:07:53.905 END TEST nvmf_filesystem 00:07:53.905 ************************************ 00:07:53.905 17:43:28 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:53.905 17:43:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:53.905 17:43:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.905 17:43:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:53.905 ************************************ 00:07:53.905 START TEST nvmf_target_discovery 00:07:53.905 ************************************ 00:07:53.905 17:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:54.164 * Looking for test storage... 
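The nvmftestfini teardown traced just above is worth calling out, because every suite in this log ends the same way: failures are tolerated while the initiator-side modules are unloaded (set +e, since rmmod of nvme_tcp, nvme_fabrics and nvme_keyring can race), then the target-side network namespace is removed and the test address is flushed from the remaining interface. A rough manual equivalent, using the interface and namespace names from this run; the body of _remove_spdk_ns is not shown in the trace, so the netns deletion below is an assumption:

    # manual equivalent of the traced teardown (names taken from this run)
    set +e
    modprobe -v -r nvme-tcp            # produces the rmmod lines seen above
    modprobe -v -r nvme-fabrics
    set -e
    ip netns del cvl_0_0_ns_spdk 2>/dev/null   # assumed _remove_spdk_ns behaviour, not shown in the trace
    ip -4 addr flush cvl_0_1                   # last step of the teardown above

With that done, nvmf.sh moves on and run_test launches the next suite, target/discovery.sh.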
00:07:54.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:54.164 17:43:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.079 17:43:30 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:56.079 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:56.079 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:56.079 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:56.079 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.079 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:56.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:07:56.337 00:07:56.337 --- 10.0.0.2 ping statistics --- 00:07:56.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.337 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:56.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:07:56.337 00:07:56.337 --- 10.0.0.1 ping statistics --- 00:07:56.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.337 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:56.337 17:43:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:56.338 17:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:56.338 17:43:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:56.338 17:43:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.338 17:43:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=840355 00:07:56.338 17:43:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:56.338 17:43:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 840355 00:07:56.338 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 840355 ']' 00:07:56.338 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.338 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:56.338 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:56.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.338 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:56.338 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.338 [2024-07-20 17:43:31.048607] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:56.338 [2024-07-20 17:43:31.048699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.338 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.338 [2024-07-20 17:43:31.118586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:56.595 [2024-07-20 17:43:31.213942] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.595 [2024-07-20 17:43:31.214006] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.595 [2024-07-20 17:43:31.214031] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.595 [2024-07-20 17:43:31.214046] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.595 [2024-07-20 17:43:31.214066] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.595 [2024-07-20 17:43:31.214161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.595 [2024-07-20 17:43:31.214224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.595 [2024-07-20 17:43:31.214274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.595 [2024-07-20 17:43:31.214277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.595 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:56.595 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:07:56.595 17:43:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:56.595 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:56.595 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.595 17:43:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.595 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:56.595 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.595 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.595 [2024-07-20 17:43:31.381756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.595 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:56.852 17:43:31 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.852 Null1 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.852 [2024-07-20 17:43:31.422156] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.852 Null2 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:56.852 17:43:31 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.852 Null3 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.852 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.852 Null4 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.853 17:43:31 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.853 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:07:57.109 00:07:57.109 Discovery Log Number of Records 6, Generation counter 6 00:07:57.109 =====Discovery Log Entry 0====== 00:07:57.109 trtype: tcp 00:07:57.109 adrfam: ipv4 00:07:57.109 subtype: current discovery subsystem 00:07:57.109 treq: not required 00:07:57.109 portid: 0 00:07:57.109 trsvcid: 4420 00:07:57.110 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:57.110 traddr: 10.0.0.2 00:07:57.110 eflags: explicit discovery connections, duplicate discovery information 00:07:57.110 sectype: none 00:07:57.110 =====Discovery Log Entry 1====== 00:07:57.110 trtype: tcp 00:07:57.110 adrfam: ipv4 00:07:57.110 subtype: nvme subsystem 00:07:57.110 treq: not required 00:07:57.110 portid: 0 00:07:57.110 trsvcid: 4420 00:07:57.110 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:57.110 traddr: 10.0.0.2 00:07:57.110 eflags: none 00:07:57.110 sectype: none 00:07:57.110 =====Discovery Log Entry 2====== 00:07:57.110 trtype: tcp 00:07:57.110 adrfam: ipv4 00:07:57.110 subtype: nvme subsystem 00:07:57.110 treq: not required 00:07:57.110 portid: 0 00:07:57.110 trsvcid: 4420 00:07:57.110 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:57.110 traddr: 10.0.0.2 00:07:57.110 eflags: none 00:07:57.110 sectype: none 00:07:57.110 =====Discovery Log Entry 3====== 00:07:57.110 trtype: tcp 00:07:57.110 adrfam: ipv4 00:07:57.110 subtype: nvme subsystem 00:07:57.110 treq: not required 00:07:57.110 portid: 0 00:07:57.110 trsvcid: 4420 00:07:57.110 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:57.110 traddr: 10.0.0.2 00:07:57.110 eflags: none 00:07:57.110 sectype: none 00:07:57.110 =====Discovery Log Entry 4====== 00:07:57.110 trtype: tcp 00:07:57.110 adrfam: ipv4 00:07:57.110 subtype: nvme subsystem 00:07:57.110 treq: not required 
00:07:57.110 portid: 0 00:07:57.110 trsvcid: 4420 00:07:57.110 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:57.110 traddr: 10.0.0.2 00:07:57.110 eflags: none 00:07:57.110 sectype: none 00:07:57.110 =====Discovery Log Entry 5====== 00:07:57.110 trtype: tcp 00:07:57.110 adrfam: ipv4 00:07:57.110 subtype: discovery subsystem referral 00:07:57.110 treq: not required 00:07:57.110 portid: 0 00:07:57.110 trsvcid: 4430 00:07:57.110 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:57.110 traddr: 10.0.0.2 00:07:57.110 eflags: none 00:07:57.110 sectype: none 00:07:57.110 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:57.110 Perform nvmf subsystem discovery via RPC 00:07:57.110 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:57.110 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.110 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.110 [ 00:07:57.110 { 00:07:57.110 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:57.110 "subtype": "Discovery", 00:07:57.110 "listen_addresses": [ 00:07:57.110 { 00:07:57.110 "trtype": "TCP", 00:07:57.110 "adrfam": "IPv4", 00:07:57.110 "traddr": "10.0.0.2", 00:07:57.110 "trsvcid": "4420" 00:07:57.110 } 00:07:57.110 ], 00:07:57.110 "allow_any_host": true, 00:07:57.110 "hosts": [] 00:07:57.110 }, 00:07:57.110 { 00:07:57.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:57.110 "subtype": "NVMe", 00:07:57.110 "listen_addresses": [ 00:07:57.110 { 00:07:57.110 "trtype": "TCP", 00:07:57.110 "adrfam": "IPv4", 00:07:57.110 "traddr": "10.0.0.2", 00:07:57.110 "trsvcid": "4420" 00:07:57.110 } 00:07:57.110 ], 00:07:57.110 "allow_any_host": true, 00:07:57.110 "hosts": [], 00:07:57.110 "serial_number": "SPDK00000000000001", 00:07:57.110 "model_number": "SPDK bdev Controller", 00:07:57.110 "max_namespaces": 32, 00:07:57.110 "min_cntlid": 1, 00:07:57.110 "max_cntlid": 65519, 00:07:57.110 "namespaces": [ 00:07:57.110 { 00:07:57.110 "nsid": 1, 00:07:57.110 "bdev_name": "Null1", 00:07:57.110 "name": "Null1", 00:07:57.110 "nguid": "D4DDC1E636984E3D84BE86AA0DA87004", 00:07:57.110 "uuid": "d4ddc1e6-3698-4e3d-84be-86aa0da87004" 00:07:57.110 } 00:07:57.110 ] 00:07:57.110 }, 00:07:57.110 { 00:07:57.110 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:57.110 "subtype": "NVMe", 00:07:57.110 "listen_addresses": [ 00:07:57.110 { 00:07:57.110 "trtype": "TCP", 00:07:57.110 "adrfam": "IPv4", 00:07:57.110 "traddr": "10.0.0.2", 00:07:57.110 "trsvcid": "4420" 00:07:57.110 } 00:07:57.110 ], 00:07:57.110 "allow_any_host": true, 00:07:57.110 "hosts": [], 00:07:57.110 "serial_number": "SPDK00000000000002", 00:07:57.110 "model_number": "SPDK bdev Controller", 00:07:57.110 "max_namespaces": 32, 00:07:57.110 "min_cntlid": 1, 00:07:57.110 "max_cntlid": 65519, 00:07:57.110 "namespaces": [ 00:07:57.110 { 00:07:57.110 "nsid": 1, 00:07:57.110 "bdev_name": "Null2", 00:07:57.110 "name": "Null2", 00:07:57.110 "nguid": "7DEAC2704BCB486890FCE9DBED122922", 00:07:57.110 "uuid": "7deac270-4bcb-4868-90fc-e9dbed122922" 00:07:57.110 } 00:07:57.110 ] 00:07:57.110 }, 00:07:57.110 { 00:07:57.110 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:57.110 "subtype": "NVMe", 00:07:57.110 "listen_addresses": [ 00:07:57.110 { 00:07:57.110 "trtype": "TCP", 00:07:57.110 "adrfam": "IPv4", 00:07:57.110 "traddr": "10.0.0.2", 00:07:57.110 "trsvcid": "4420" 00:07:57.110 } 00:07:57.110 ], 00:07:57.110 "allow_any_host": true, 
00:07:57.110 "hosts": [], 00:07:57.110 "serial_number": "SPDK00000000000003", 00:07:57.110 "model_number": "SPDK bdev Controller", 00:07:57.110 "max_namespaces": 32, 00:07:57.110 "min_cntlid": 1, 00:07:57.110 "max_cntlid": 65519, 00:07:57.110 "namespaces": [ 00:07:57.110 { 00:07:57.110 "nsid": 1, 00:07:57.110 "bdev_name": "Null3", 00:07:57.110 "name": "Null3", 00:07:57.110 "nguid": "86749D38D83945C3AFFA79BD01D6DB79", 00:07:57.110 "uuid": "86749d38-d839-45c3-affa-79bd01d6db79" 00:07:57.110 } 00:07:57.110 ] 00:07:57.110 }, 00:07:57.110 { 00:07:57.110 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:57.110 "subtype": "NVMe", 00:07:57.110 "listen_addresses": [ 00:07:57.110 { 00:07:57.110 "trtype": "TCP", 00:07:57.110 "adrfam": "IPv4", 00:07:57.110 "traddr": "10.0.0.2", 00:07:57.110 "trsvcid": "4420" 00:07:57.110 } 00:07:57.110 ], 00:07:57.110 "allow_any_host": true, 00:07:57.110 "hosts": [], 00:07:57.110 "serial_number": "SPDK00000000000004", 00:07:57.110 "model_number": "SPDK bdev Controller", 00:07:57.110 "max_namespaces": 32, 00:07:57.110 "min_cntlid": 1, 00:07:57.110 "max_cntlid": 65519, 00:07:57.110 "namespaces": [ 00:07:57.110 { 00:07:57.110 "nsid": 1, 00:07:57.110 "bdev_name": "Null4", 00:07:57.110 "name": "Null4", 00:07:57.110 "nguid": "4DE7C9525CBB4CC6A68DE3044FE53549", 00:07:57.111 "uuid": "4de7c952-5cbb-4cc6-a68d-e3044fe53549" 00:07:57.111 } 00:07:57.111 ] 00:07:57.111 } 00:07:57.111 ] 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:57.111 rmmod nvme_tcp 00:07:57.111 rmmod nvme_fabrics 00:07:57.111 rmmod nvme_keyring 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 840355 ']' 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 840355 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 840355 ']' 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 840355 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 840355 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 840355' 00:07:57.111 killing process with pid 840355 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 840355 00:07:57.111 17:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 840355 00:07:57.368 17:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:57.368 17:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:57.368 17:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:57.368 17:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:57.368 17:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:57.368 17:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.368 17:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.368 17:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.892 17:43:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:59.892 00:07:59.892 real 0m5.502s 00:07:59.892 user 0m4.387s 00:07:59.892 sys 0m1.895s 00:07:59.892 17:43:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.892 17:43:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.892 ************************************ 00:07:59.892 END TEST nvmf_target_discovery 00:07:59.892 ************************************ 00:07:59.892 17:43:34 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:59.892 17:43:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:59.892 17:43:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.892 17:43:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:59.892 ************************************ 00:07:59.892 START TEST nvmf_referrals 00:07:59.892 ************************************ 00:07:59.892 17:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:59.892 * Looking for test storage... 00:07:59.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.892 17:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.892 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:59.892 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.892 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.892 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.892 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.892 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.892 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:59.893 17:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.792 17:43:36 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:01.792 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:01.792 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:01.792 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:01.793 17:43:36 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:01.793 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:01.793 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.793 17:43:36 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:01.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:01.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:08:01.793 00:08:01.793 --- 10.0.0.2 ping statistics --- 00:08:01.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.793 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:08:01.793 00:08:01.793 --- 10.0.0.1 ping statistics --- 00:08:01.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.793 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=842443 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 842443 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 842443 ']' 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:01.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:01.793 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.793 [2024-07-20 17:43:36.514563] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:01.793 [2024-07-20 17:43:36.514635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.793 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.793 [2024-07-20 17:43:36.581114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.052 [2024-07-20 17:43:36.675537] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.052 [2024-07-20 17:43:36.675595] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.052 [2024-07-20 17:43:36.675611] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.052 [2024-07-20 17:43:36.675625] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.052 [2024-07-20 17:43:36.675637] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.052 [2024-07-20 17:43:36.675721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.052 [2024-07-20 17:43:36.675778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.052 [2024-07-20 17:43:36.675829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.052 [2024-07-20 17:43:36.675833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.052 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:02.052 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:02.052 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:02.052 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:02.052 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.052 17:43:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.052 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:02.052 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.052 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.052 [2024-07-20 17:43:36.841550] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.310 [2024-07-20 17:43:36.853824] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
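The commands traced below exercise SPDK's discovery-referral RPCs against the listener that has just come up on 10.0.0.2:8009. A minimal standalone sketch of the same flow, assuming scripts/rpc.py from the SPDK tree is used as the rpc_cmd backend (talking to the default /var/tmp/spdk.sock) and that nvme-cli is installed; paths are illustrative, not taken from this run:

    # create the TCP transport and a discovery listener (the test did this just above)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

    # register three referrals on the referral port used by referrals.sh
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # read the referrals back over RPC, then over the wire via a discovery log page
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

    # drop a referral again
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

The test below compares the two views (RPC output versus the discovery log page read by the initiator) and expects them to agree after every add and remove.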
00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:02.310 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:02.311 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:02.311 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:02.311 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 
-s 8009 -o json 00:08:02.311 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:02.311 17:43:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.311 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:02.568 17:43:37 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:02.568 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:02.569 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:02.569 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:02.569 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:02.825 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:02.825 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:02.825 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:02.825 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:02.825 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:02.825 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:02.825 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:02.825 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:02.825 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:02.825 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:02.825 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:02.825 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:02.825 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:03.081 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:03.338 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:03.339 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:03.339 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:03.339 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:03.339 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:03.339 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:03.339 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:03.339 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:03.339 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.339 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.339 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.339 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:03.339 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:03.339 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.339 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.339 17:43:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.339 17:43:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:03.339 rmmod nvme_tcp 00:08:03.339 rmmod nvme_fabrics 00:08:03.339 rmmod nvme_keyring 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 842443 ']' 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 842443 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 842443 ']' 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 842443 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:03.339 17:43:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 842443 00:08:03.597 17:43:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:03.597 17:43:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:03.597 17:43:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 842443' 00:08:03.597 killing process with pid 842443 00:08:03.597 17:43:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 842443 00:08:03.597 17:43:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 842443 00:08:03.597 17:43:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:03.597 17:43:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:03.597 17:43:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:03.597 17:43:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:03.597 17:43:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:03.597 17:43:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.597 17:43:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.597 17:43:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.126 17:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:06.126 00:08:06.126 real 0m6.210s 00:08:06.126 user 0m8.092s 00:08:06.126 sys 0m1.951s 00:08:06.126 17:43:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 
00:08:06.126 17:43:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.126 ************************************ 00:08:06.126 END TEST nvmf_referrals 00:08:06.126 ************************************ 00:08:06.126 17:43:40 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:06.126 17:43:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:06.126 17:43:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:06.126 17:43:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:06.126 ************************************ 00:08:06.126 START TEST nvmf_connect_disconnect 00:08:06.126 ************************************ 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:06.126 * Looking for test storage... 00:08:06.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:06.126 
17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:06.126 17:43:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.065 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:08.066 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:08.066 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:08.066 
17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:08.066 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:08.066 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.066 17:43:42 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:08.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:08:08.066 00:08:08.066 --- 10.0.0.2 ping statistics --- 00:08:08.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.066 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:08.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:08:08.066 00:08:08.066 --- 10.0.0.1 ping statistics --- 00:08:08.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.066 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=844725 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 844725 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 844725 ']' 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:08.066 17:43:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:08.066 [2024-07-20 17:43:42.796638] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
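The nvmf_tcp_init trace above establishes the topology every TCP phy test here relies on: one port of the dual-port NIC (cvl_0_0) is moved into a private network namespace and becomes the 10.0.0.2/24 target side, the other port (cvl_0_1) stays in the root namespace as the 10.0.0.1/24 initiator side, TCP port 4420 is opened in iptables, and both directions are ping-verified before the target starts. Condensed into a standalone sketch, with interface names, namespace name and addresses exactly as in the log and error handling omitted:

    # Hedged recap of nvmf_tcp_init as traced above.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) through
    ping -c 1 10.0.0.2                                        # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                    # target namespace -> initiator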
00:08:08.066 [2024-07-20 17:43:42.796732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.325 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.325 [2024-07-20 17:43:42.875597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.325 [2024-07-20 17:43:42.974817] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.325 [2024-07-20 17:43:42.974876] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.325 [2024-07-20 17:43:42.974893] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.325 [2024-07-20 17:43:42.974907] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.325 [2024-07-20 17:43:42.974918] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.325 [2024-07-20 17:43:42.974973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.325 [2024-07-20 17:43:42.975043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.325 [2024-07-20 17:43:42.975004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.325 [2024-07-20 17:43:42.975046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.325 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:08.325 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:08.325 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:08.325 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:08.325 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:08.583 [2024-07-20 17:43:43.127573] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:08.583 17:43:43 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:08.583 [2024-07-20 17:43:43.180830] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:08.583 17:43:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:11.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.710 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:10:54.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:56.173 rmmod nvme_tcp 00:11:56.173 rmmod nvme_fabrics 00:11:56.173 rmmod nvme_keyring 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 844725 ']' 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 844725 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 844725 
']' 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 844725 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 844725 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 844725' 00:11:56.173 killing process with pid 844725 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 844725 00:11:56.173 17:47:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 844725 00:11:56.429 17:47:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:56.429 17:47:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:56.429 17:47:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:56.429 17:47:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:56.429 17:47:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:56.429 17:47:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.429 17:47:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.429 17:47:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.954 17:47:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:58.954 00:11:58.954 real 3m52.718s 00:11:58.954 user 14m44.090s 00:11:58.954 sys 0m32.359s 00:11:58.954 17:47:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:58.954 17:47:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:58.954 ************************************ 00:11:58.954 END TEST nvmf_connect_disconnect 00:11:58.954 ************************************ 00:11:58.954 17:47:33 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:58.954 17:47:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:58.954 17:47:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:58.954 17:47:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:58.954 ************************************ 00:11:58.954 START TEST nvmf_multitarget 00:11:58.954 ************************************ 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:58.954 * Looking for test storage... 
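The nvmf_connect_disconnect run that just finished (3m52s, 100 iterations) reduces to a short provisioning sequence plus a connect/disconnect loop. The RPC calls below are copied from the trace; the exact nvme connect invocation is not shown in this excerpt, so the -t/-a/-s/-n arguments are a plausible reconstruction from the listener address rather than the literal test command.

    # Hedged recap of target/connect_disconnect.sh: provision one subsystem, then cycle the initiator.
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                                    # creates Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    for i in $(seq 1 100); do                                           # num_iterations=100 above
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1                   # prints "... disconnected 1 controller(s)"
    done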
00:11:58.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:58.954 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.955 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:58.955 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:58.955 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:58.955 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:11:58.955 17:47:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:58.955 17:47:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.955 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:58.955 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:58.955 17:47:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:11:58.955 17:47:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:00.854 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.854 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:00.855 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:00.855 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:00.855 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:00.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:00.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:12:00.855 00:12:00.855 --- 10.0.0.2 ping statistics --- 00:12:00.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.855 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:00.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:12:00.855 00:12:00.855 --- 10.0.0.1 ping statistics --- 00:12:00.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.855 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=875367 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 875367 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 875367 ']' 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:00.855 17:47:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:00.855 [2024-07-20 17:47:35.503052] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
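As with the first test, nvmfappstart then launches the SPDK target inside the namespace and blocks until its RPC socket answers (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above). A minimal stand-in for that start-and-wait step is sketched below; the binary path is shortened from the Jenkins workspace path in the log, and the socket-polling loop is only illustrative of what the real waitforlisten helper in autotest_common.sh does with more bookkeeping.

    # Hedged sketch of nvmfappstart/waitforlisten: run nvmf_tgt in the test namespace, poll its RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup"; exit 1; }
        sleep 0.5
    done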
00:12:00.855 [2024-07-20 17:47:35.503151] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.855 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.855 [2024-07-20 17:47:35.572902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:01.113 [2024-07-20 17:47:35.667489] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.113 [2024-07-20 17:47:35.667548] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.113 [2024-07-20 17:47:35.667575] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.113 [2024-07-20 17:47:35.667589] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.113 [2024-07-20 17:47:35.667601] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.113 [2024-07-20 17:47:35.667680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.113 [2024-07-20 17:47:35.667735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.113 [2024-07-20 17:47:35.667789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.113 [2024-07-20 17:47:35.667799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.113 17:47:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:01.113 17:47:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:01.113 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:01.113 17:47:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:01.113 17:47:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:01.113 17:47:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.113 17:47:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:01.113 17:47:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:01.113 17:47:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:01.369 17:47:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:01.369 17:47:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:01.369 "nvmf_tgt_1" 00:12:01.369 17:47:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:01.369 "nvmf_tgt_2" 00:12:01.626 17:47:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:01.626 17:47:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:01.626 17:47:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:01.626 
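The multitarget test body is only a handful of calls to test/nvmf/target/multitarget_rpc.py: read the current target list, add two named targets, and check that the count went from 1 to 3; the trace that follows deletes them again and verifies the count drops back to 1. Condensed below with the script path shortened; the calls and flags are exactly those in the trace.

    # Hedged recap of target/multitarget.sh (paths shortened).
    rpc_py=./test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]      # only the default target exists
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]      # default target + the two new ones
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]      # back to just the default target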
17:47:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:01.626 true 00:12:01.626 17:47:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:01.883 true 00:12:01.883 17:47:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:01.883 17:47:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:01.883 17:47:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:01.883 17:47:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:01.883 17:47:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:01.883 17:47:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:01.883 17:47:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:01.883 17:47:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:01.883 17:47:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:01.883 17:47:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:01.883 17:47:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:01.883 rmmod nvme_tcp 00:12:01.883 rmmod nvme_fabrics 00:12:01.883 rmmod nvme_keyring 00:12:01.883 17:47:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:01.883 17:47:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:01.883 17:47:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:01.883 17:47:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 875367 ']' 00:12:01.884 17:47:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 875367 00:12:01.884 17:47:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 875367 ']' 00:12:01.884 17:47:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 875367 00:12:01.884 17:47:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:01.884 17:47:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:01.884 17:47:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 875367 00:12:01.884 17:47:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:01.884 17:47:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:01.884 17:47:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 875367' 00:12:01.884 killing process with pid 875367 00:12:01.884 17:47:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 875367 00:12:01.884 17:47:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 875367 00:12:02.141 17:47:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:02.141 17:47:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:02.141 17:47:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:02.141 17:47:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.141 17:47:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:02.141 17:47:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.141 17:47:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.141 17:47:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.669 17:47:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:04.669 00:12:04.669 real 0m5.688s 00:12:04.669 user 0m6.269s 00:12:04.669 sys 0m1.911s 00:12:04.669 17:47:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:04.669 17:47:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:04.669 ************************************ 00:12:04.669 END TEST nvmf_multitarget 00:12:04.669 ************************************ 00:12:04.669 17:47:38 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:04.669 17:47:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:04.669 17:47:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:04.669 17:47:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:04.669 ************************************ 00:12:04.669 START TEST nvmf_rpc 00:12:04.669 ************************************ 00:12:04.669 17:47:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:04.669 * Looking for test storage... 00:12:04.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.669 17:47:39 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.669 
17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:04.669 17:47:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:06.597 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:06.597 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:06.597 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.597 
17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:06.597 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:06.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:06.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:12:06.597 00:12:06.597 --- 10.0.0.2 ping statistics --- 00:12:06.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.597 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:06.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:06.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:12:06.597 00:12:06.597 --- 10.0.0.1 ping statistics --- 00:12:06.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.597 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=877464 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 877464 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 877464 ']' 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:06.597 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.597 [2024-07-20 17:47:41.338885] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
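The nvmftestinit sequence above places one port of the detected E810 pair (cvl_0_0) into a private network namespace, addresses both ends of the link, opens the NVMe/TCP listener port, and verifies reachability in both directions before loading the host-side driver. A minimal by-hand sketch of that setup, assuming the cvl_0_0/cvl_0_1 interface names this host reported (they will differ on other machines):

    # target side lives in its own namespace so one box can act as initiator and target
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP default port
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
    modprobe nvme-tcp                                                  # kernel initiator driver

nvmf_tgt is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) so that its listeners bind on the target-side address, which is what the waitforlisten/DPDK output below records.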
00:12:06.597 [2024-07-20 17:47:41.338979] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.597 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.854 [2024-07-20 17:47:41.409275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.854 [2024-07-20 17:47:41.505561] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.854 [2024-07-20 17:47:41.505609] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.854 [2024-07-20 17:47:41.505635] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.854 [2024-07-20 17:47:41.505649] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.854 [2024-07-20 17:47:41.505661] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.854 [2024-07-20 17:47:41.507435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.854 [2024-07-20 17:47:41.507527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.854 [2024-07-20 17:47:41.507735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.854 [2024-07-20 17:47:41.507739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.854 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:06.854 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:06.854 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:06.854 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:06.854 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:07.112 "tick_rate": 2700000000, 00:12:07.112 "poll_groups": [ 00:12:07.112 { 00:12:07.112 "name": "nvmf_tgt_poll_group_000", 00:12:07.112 "admin_qpairs": 0, 00:12:07.112 "io_qpairs": 0, 00:12:07.112 "current_admin_qpairs": 0, 00:12:07.112 "current_io_qpairs": 0, 00:12:07.112 "pending_bdev_io": 0, 00:12:07.112 "completed_nvme_io": 0, 00:12:07.112 "transports": [] 00:12:07.112 }, 00:12:07.112 { 00:12:07.112 "name": "nvmf_tgt_poll_group_001", 00:12:07.112 "admin_qpairs": 0, 00:12:07.112 "io_qpairs": 0, 00:12:07.112 "current_admin_qpairs": 0, 00:12:07.112 "current_io_qpairs": 0, 00:12:07.112 "pending_bdev_io": 0, 00:12:07.112 "completed_nvme_io": 0, 00:12:07.112 "transports": [] 00:12:07.112 }, 00:12:07.112 { 00:12:07.112 "name": "nvmf_tgt_poll_group_002", 00:12:07.112 "admin_qpairs": 0, 00:12:07.112 "io_qpairs": 0, 00:12:07.112 "current_admin_qpairs": 0, 00:12:07.112 "current_io_qpairs": 0, 00:12:07.112 "pending_bdev_io": 0, 00:12:07.112 "completed_nvme_io": 0, 00:12:07.112 "transports": [] 
00:12:07.112 }, 00:12:07.112 { 00:12:07.112 "name": "nvmf_tgt_poll_group_003", 00:12:07.112 "admin_qpairs": 0, 00:12:07.112 "io_qpairs": 0, 00:12:07.112 "current_admin_qpairs": 0, 00:12:07.112 "current_io_qpairs": 0, 00:12:07.112 "pending_bdev_io": 0, 00:12:07.112 "completed_nvme_io": 0, 00:12:07.112 "transports": [] 00:12:07.112 } 00:12:07.112 ] 00:12:07.112 }' 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.112 [2024-07-20 17:47:41.744848] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:07.112 "tick_rate": 2700000000, 00:12:07.112 "poll_groups": [ 00:12:07.112 { 00:12:07.112 "name": "nvmf_tgt_poll_group_000", 00:12:07.112 "admin_qpairs": 0, 00:12:07.112 "io_qpairs": 0, 00:12:07.112 "current_admin_qpairs": 0, 00:12:07.112 "current_io_qpairs": 0, 00:12:07.112 "pending_bdev_io": 0, 00:12:07.112 "completed_nvme_io": 0, 00:12:07.112 "transports": [ 00:12:07.112 { 00:12:07.112 "trtype": "TCP" 00:12:07.112 } 00:12:07.112 ] 00:12:07.112 }, 00:12:07.112 { 00:12:07.112 "name": "nvmf_tgt_poll_group_001", 00:12:07.112 "admin_qpairs": 0, 00:12:07.112 "io_qpairs": 0, 00:12:07.112 "current_admin_qpairs": 0, 00:12:07.112 "current_io_qpairs": 0, 00:12:07.112 "pending_bdev_io": 0, 00:12:07.112 "completed_nvme_io": 0, 00:12:07.112 "transports": [ 00:12:07.112 { 00:12:07.112 "trtype": "TCP" 00:12:07.112 } 00:12:07.112 ] 00:12:07.112 }, 00:12:07.112 { 00:12:07.112 "name": "nvmf_tgt_poll_group_002", 00:12:07.112 "admin_qpairs": 0, 00:12:07.112 "io_qpairs": 0, 00:12:07.112 "current_admin_qpairs": 0, 00:12:07.112 "current_io_qpairs": 0, 00:12:07.112 "pending_bdev_io": 0, 00:12:07.112 "completed_nvme_io": 0, 00:12:07.112 "transports": [ 00:12:07.112 { 00:12:07.112 "trtype": "TCP" 00:12:07.112 } 00:12:07.112 ] 00:12:07.112 }, 00:12:07.112 { 00:12:07.112 "name": "nvmf_tgt_poll_group_003", 00:12:07.112 "admin_qpairs": 0, 00:12:07.112 "io_qpairs": 0, 00:12:07.112 "current_admin_qpairs": 0, 00:12:07.112 "current_io_qpairs": 0, 00:12:07.112 "pending_bdev_io": 0, 00:12:07.112 "completed_nvme_io": 0, 00:12:07.112 "transports": [ 00:12:07.112 { 00:12:07.112 "trtype": "TCP" 00:12:07.112 } 00:12:07.112 ] 00:12:07.112 } 00:12:07.112 ] 
00:12:07.112 }' 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:07.112 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.113 Malloc1 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.113 [2024-07-20 17:47:41.884142] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:07.113 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:07.113 [2024-07-20 17:47:41.906693] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:07.113 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:07.113 could not add new controller: failed to write to nvme-fabrics device 00:12:07.370 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:07.370 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:07.370 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:07.370 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:07.370 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:07.371 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.371 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.371 17:47:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.371 17:47:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.935 17:47:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:07.935 17:47:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:07.935 17:47:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.935 17:47:42 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:07.935 17:47:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:09.829 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:09.829 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:09.829 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.829 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:09.829 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.829 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:09.829 17:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.829 17:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.829 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:09.829 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:09.830 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.086 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:10.086 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.086 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:10.086 17:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:10.086 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.086 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.086 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.086 17:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x 
/usr/sbin/nvme ]] 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.087 [2024-07-20 17:47:44.659808] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:10.087 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:10.087 could not add new controller: failed to write to nvme-fabrics device 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.087 17:47:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.649 17:47:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.649 17:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:10.649 17:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.649 17:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:10.649 17:47:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:12.542 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:12.542 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:12.542 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.542 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:12.542 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.542 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:12.542 17:47:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 
-- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.799 [2024-07-20 17:47:47.398158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.799 17:47:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:13.362 17:47:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:13.362 17:47:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:13.362 17:47:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.362 17:47:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:13.362 17:47:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:15.882 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.883 [2024-07-20 17:47:50.162655] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:15.883 
17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.883 17:47:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:16.140 17:47:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:16.140 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:16.140 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:16.140 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:16.140 17:47:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.036 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:18.294 17:47:52 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.294 [2024-07-20 17:47:52.850892] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.294 17:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.858 17:47:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.858 17:47:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:18.858 17:47:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.858 17:47:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:18.858 17:47:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.750 [2024-07-20 17:47:55.526872] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.750 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.007 17:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.007 17:47:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.582 17:47:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.582 17:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:21.582 17:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 
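The two expected "does not allow host" failures earlier in this test (rpc.sh@58 and @69) exercise per-host access control: with allow_any_host disabled, a connect is rejected until the host NQN is explicitly added, and revoking it closes the door again until allow_any_host is re-enabled. Condensed into the underlying RPC and nvme-cli calls, and assuming scripts/rpc.py as a stand-in for the harness's rpc_cmd wrapper (NVME_HOSTNQN/NVME_HOSTID as set by nvmf/common.sh above), the exchange is roughly:

    # subsystem with allow_any_host switched off: unknown hosts are rejected
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"   # fails: subsystem does not allow host

    # whitelist the host NQN and the same connect succeeds
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

    # removing the host rejects it again; enabling allow_any_host admits any initiator
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1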
00:12:21.582 17:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:21.582 17:47:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.504 [2024-07-20 17:47:58.249744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.504 17:47:58 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.504 17:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.068 17:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:24.068 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:24.068 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.068 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:24.068 17:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
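The loop that has just completed (rpc.sh@81 through @94, five passes) rebuilds the subsystem from scratch on every iteration, exports the Malloc1 bdev as namespace 5, connects and disconnects a kernel initiator, and tears everything down again. One iteration, reduced to the same rpc.py/nvme-cli form as the sketch above (again an approximation, not the harness's literal code):

    for i in 1 2 3 4 5; do
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        # waitforserial: poll lsblk until a device with serial SPDKISFASTANDAWESOME appears
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1

        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

The seq 1 5 loop that follows (rpc.sh@99 onwards) repeats the same subsystem create/attach/detach/delete sequence back-to-back, this time without connecting an initiator in between.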
00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 [2024-07-20 17:48:01.019449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 [2024-07-20 17:48:01.067476] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 [2024-07-20 17:48:01.115646] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 [2024-07-20 17:48:01.163857] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.591 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.592 [2024-07-20 17:48:01.211998] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:26.592 "tick_rate": 2700000000, 00:12:26.592 "poll_groups": [ 00:12:26.592 { 00:12:26.592 "name": "nvmf_tgt_poll_group_000", 00:12:26.592 "admin_qpairs": 2, 00:12:26.592 
"io_qpairs": 84, 00:12:26.592 "current_admin_qpairs": 0, 00:12:26.592 "current_io_qpairs": 0, 00:12:26.592 "pending_bdev_io": 0, 00:12:26.592 "completed_nvme_io": 186, 00:12:26.592 "transports": [ 00:12:26.592 { 00:12:26.592 "trtype": "TCP" 00:12:26.592 } 00:12:26.592 ] 00:12:26.592 }, 00:12:26.592 { 00:12:26.592 "name": "nvmf_tgt_poll_group_001", 00:12:26.592 "admin_qpairs": 2, 00:12:26.592 "io_qpairs": 84, 00:12:26.592 "current_admin_qpairs": 0, 00:12:26.592 "current_io_qpairs": 0, 00:12:26.592 "pending_bdev_io": 0, 00:12:26.592 "completed_nvme_io": 183, 00:12:26.592 "transports": [ 00:12:26.592 { 00:12:26.592 "trtype": "TCP" 00:12:26.592 } 00:12:26.592 ] 00:12:26.592 }, 00:12:26.592 { 00:12:26.592 "name": "nvmf_tgt_poll_group_002", 00:12:26.592 "admin_qpairs": 1, 00:12:26.592 "io_qpairs": 84, 00:12:26.592 "current_admin_qpairs": 0, 00:12:26.592 "current_io_qpairs": 0, 00:12:26.592 "pending_bdev_io": 0, 00:12:26.592 "completed_nvme_io": 134, 00:12:26.592 "transports": [ 00:12:26.592 { 00:12:26.592 "trtype": "TCP" 00:12:26.592 } 00:12:26.592 ] 00:12:26.592 }, 00:12:26.592 { 00:12:26.592 "name": "nvmf_tgt_poll_group_003", 00:12:26.592 "admin_qpairs": 2, 00:12:26.592 "io_qpairs": 84, 00:12:26.592 "current_admin_qpairs": 0, 00:12:26.592 "current_io_qpairs": 0, 00:12:26.592 "pending_bdev_io": 0, 00:12:26.592 "completed_nvme_io": 183, 00:12:26.592 "transports": [ 00:12:26.592 { 00:12:26.592 "trtype": "TCP" 00:12:26.592 } 00:12:26.592 ] 00:12:26.592 } 00:12:26.592 ] 00:12:26.592 }' 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:26.592 rmmod nvme_tcp 00:12:26.592 rmmod nvme_fabrics 00:12:26.592 rmmod nvme_keyring 00:12:26.592 17:48:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:26.849 17:48:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:26.849 17:48:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:26.849 17:48:01 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 877464 ']' 00:12:26.849 17:48:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 877464 00:12:26.849 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 877464 ']' 00:12:26.849 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 877464 00:12:26.849 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:12:26.849 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:26.849 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 877464 00:12:26.849 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:26.849 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:26.849 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 877464' 00:12:26.849 killing process with pid 877464 00:12:26.849 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 877464 00:12:26.849 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 877464 00:12:27.106 17:48:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:27.106 17:48:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:27.106 17:48:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:27.106 17:48:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.106 17:48:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:27.106 17:48:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.106 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.106 17:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.058 17:48:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:29.058 00:12:29.058 real 0m24.731s 00:12:29.058 user 1m19.743s 00:12:29.058 sys 0m3.802s 00:12:29.058 17:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:29.058 17:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.058 ************************************ 00:12:29.058 END TEST nvmf_rpc 00:12:29.058 ************************************ 00:12:29.058 17:48:03 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:29.058 17:48:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:29.058 17:48:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:29.058 17:48:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:29.058 ************************************ 00:12:29.058 START TEST nvmf_invalid 00:12:29.058 ************************************ 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:29.058 * Looking for test storage... 
00:12:29.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:29.058 17:48:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:30.955 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.955 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:30.955 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:30.955 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:30.955 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:30.955 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:30.955 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:30.955 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:30.955 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:30.955 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:30.955 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:30.955 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:30.955 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:30.955 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:30.955 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:30.955 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.955 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.956 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.956 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.956 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.956 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.956 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.956 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.956 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.956 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.956 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.956 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:30.956 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:30.956 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:30.956 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:30.956 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:30.956 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:30.956 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:30.956 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:30.956 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:31.214 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:31.214 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:31.214 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:31.214 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:31.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:31.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:12:31.215 00:12:31.215 --- 10.0.0.2 ping statistics --- 00:12:31.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.215 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:12:31.215 00:12:31.215 --- 10.0.0.1 ping statistics --- 00:12:31.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.215 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=882057 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 882057 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 882057 ']' 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:31.215 17:48:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:31.215 [2024-07-20 17:48:05.972047] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:12:31.215 [2024-07-20 17:48:05.972129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.215 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.472 [2024-07-20 17:48:06.042368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.472 [2024-07-20 17:48:06.136848] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.472 [2024-07-20 17:48:06.136911] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.472 [2024-07-20 17:48:06.136928] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.472 [2024-07-20 17:48:06.136941] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.472 [2024-07-20 17:48:06.136953] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.472 [2024-07-20 17:48:06.137013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.472 [2024-07-20 17:48:06.137068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.472 [2024-07-20 17:48:06.137123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.472 [2024-07-20 17:48:06.137126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.728 17:48:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:31.728 17:48:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:12:31.728 17:48:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:31.728 17:48:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:31.728 17:48:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:31.728 17:48:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.728 17:48:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:31.728 17:48:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27953 00:12:31.985 [2024-07-20 17:48:06.557524] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:31.985 17:48:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:31.985 { 00:12:31.985 "nqn": "nqn.2016-06.io.spdk:cnode27953", 00:12:31.985 "tgt_name": "foobar", 00:12:31.985 "method": "nvmf_create_subsystem", 00:12:31.985 "req_id": 1 00:12:31.985 } 00:12:31.985 Got JSON-RPC error response 00:12:31.985 response: 00:12:31.985 { 00:12:31.985 "code": -32603, 00:12:31.985 "message": "Unable to find target foobar" 00:12:31.985 }' 00:12:31.985 17:48:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:31.985 { 00:12:31.985 "nqn": "nqn.2016-06.io.spdk:cnode27953", 00:12:31.985 "tgt_name": "foobar", 00:12:31.985 "method": "nvmf_create_subsystem", 00:12:31.985 "req_id": 1 00:12:31.985 } 00:12:31.985 Got JSON-RPC error response 00:12:31.985 response: 00:12:31.986 { 00:12:31.986 "code": -32603, 00:12:31.986 "message": "Unable to find target foobar" 00:12:31.986 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:31.986 17:48:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:31.986 17:48:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3498 00:12:32.242 [2024-07-20 17:48:06.826387] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3498: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:32.242 17:48:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:32.242 { 00:12:32.242 "nqn": "nqn.2016-06.io.spdk:cnode3498", 00:12:32.242 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:32.242 "method": "nvmf_create_subsystem", 00:12:32.242 "req_id": 1 00:12:32.242 } 00:12:32.242 Got JSON-RPC error response 00:12:32.242 response: 00:12:32.242 { 00:12:32.242 "code": -32602, 00:12:32.242 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:32.242 }' 00:12:32.242 17:48:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:32.242 { 00:12:32.242 "nqn": "nqn.2016-06.io.spdk:cnode3498", 00:12:32.242 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:32.242 "method": "nvmf_create_subsystem", 00:12:32.242 "req_id": 1 00:12:32.242 } 00:12:32.242 Got JSON-RPC error response 00:12:32.242 response: 00:12:32.242 { 00:12:32.242 "code": -32602, 00:12:32.242 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:32.242 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:32.242 17:48:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:32.242 17:48:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7712 00:12:32.500 [2024-07-20 17:48:07.087285] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7712: invalid model number 'SPDK_Controller' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:32.500 { 00:12:32.500 "nqn": "nqn.2016-06.io.spdk:cnode7712", 00:12:32.500 "model_number": "SPDK_Controller\u001f", 00:12:32.500 "method": "nvmf_create_subsystem", 00:12:32.500 "req_id": 1 00:12:32.500 } 00:12:32.500 Got JSON-RPC error response 00:12:32.500 response: 00:12:32.500 { 00:12:32.500 "code": -32602, 00:12:32.500 "message": "Invalid MN SPDK_Controller\u001f" 00:12:32.500 }' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:32.500 { 00:12:32.500 "nqn": "nqn.2016-06.io.spdk:cnode7712", 00:12:32.500 "model_number": "SPDK_Controller\u001f", 00:12:32.500 "method": "nvmf_create_subsystem", 00:12:32.500 "req_id": 1 00:12:32.500 } 00:12:32.500 Got JSON-RPC error response 00:12:32.500 response: 00:12:32.500 { 00:12:32.500 "code": -32602, 00:12:32.500 "message": "Invalid MN SPDK_Controller\u001f" 00:12:32.500 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' 
'92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.500 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:32.501 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:32.501 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:32.501 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.501 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.501 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:32.501 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:32.501 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:32.501 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.501 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.501 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ s == \- ]] 00:12:32.501 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'sJ,R_!gZhUfz>U0zzJd95' 00:12:32.501 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'sJ,R_!gZhUfz>U0zzJd95' nqn.2016-06.io.spdk:cnode21053 00:12:32.758 [2024-07-20 
17:48:07.436475] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21053: invalid serial number 'sJ,R_!gZhUfz>U0zzJd95' 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:32.758 { 00:12:32.758 "nqn": "nqn.2016-06.io.spdk:cnode21053", 00:12:32.758 "serial_number": "sJ,R_!gZhUfz>U0zzJd95", 00:12:32.758 "method": "nvmf_create_subsystem", 00:12:32.758 "req_id": 1 00:12:32.758 } 00:12:32.758 Got JSON-RPC error response 00:12:32.758 response: 00:12:32.758 { 00:12:32.758 "code": -32602, 00:12:32.758 "message": "Invalid SN sJ,R_!gZhUfz>U0zzJd95" 00:12:32.758 }' 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:32.758 { 00:12:32.758 "nqn": "nqn.2016-06.io.spdk:cnode21053", 00:12:32.758 "serial_number": "sJ,R_!gZhUfz>U0zzJd95", 00:12:32.758 "method": "nvmf_create_subsystem", 00:12:32.758 "req_id": 1 00:12:32.758 } 00:12:32.758 Got JSON-RPC error response 00:12:32.758 response: 00:12:32.758 { 00:12:32.758 "code": -32602, 00:12:32.758 "message": "Invalid SN sJ,R_!gZhUfz>U0zzJd95" 00:12:32.758 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll 
< length )) 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:32.758 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 105 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x61' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.759 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 
00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ * == \- ]] 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '*;vrmC9}P:bifFn (~%a+1}X'\''c~L3Oj0R vXqv.v' 00:12:33.017 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '*;vrmC9}P:bifFn (~%a+1}X'\''c~L3Oj0R vXqv.v' nqn.2016-06.io.spdk:cnode7205 00:12:33.274 [2024-07-20 17:48:07.845832] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7205: invalid model number '*;vrmC9}P:bifFn (~%a+1}X'c~L3Oj0R vXqv.v' 00:12:33.274 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:33.274 { 00:12:33.274 "nqn": "nqn.2016-06.io.spdk:cnode7205", 00:12:33.274 "model_number": "*;vrmC9}P:bifFn (~%a+1}X'\''c~L3Oj0R vXqv.\u007fv", 00:12:33.274 "method": "nvmf_create_subsystem", 00:12:33.275 "req_id": 1 00:12:33.275 } 00:12:33.275 Got JSON-RPC error response 00:12:33.275 response: 00:12:33.275 { 00:12:33.275 "code": -32602, 00:12:33.275 "message": "Invalid MN *;vrmC9}P:bifFn (~%a+1}X'\''c~L3Oj0R vXqv.\u007fv" 00:12:33.275 }' 00:12:33.275 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:33.275 { 
00:12:33.275 "nqn": "nqn.2016-06.io.spdk:cnode7205", 00:12:33.275 "model_number": "*;vrmC9}P:bifFn (~%a+1}X'c~L3Oj0R vXqv.\u007fv", 00:12:33.275 "method": "nvmf_create_subsystem", 00:12:33.275 "req_id": 1 00:12:33.275 } 00:12:33.275 Got JSON-RPC error response 00:12:33.275 response: 00:12:33.275 { 00:12:33.275 "code": -32602, 00:12:33.275 "message": "Invalid MN *;vrmC9}P:bifFn (~%a+1}X'c~L3Oj0R vXqv.\u007fv" 00:12:33.275 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:33.275 17:48:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:33.532 [2024-07-20 17:48:08.106805] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.532 17:48:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:33.802 17:48:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:33.802 17:48:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:33.802 17:48:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:33.802 17:48:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:33.802 17:48:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:34.059 [2024-07-20 17:48:08.620413] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:34.059 17:48:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:34.059 { 00:12:34.059 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:34.059 "listen_address": { 00:12:34.059 "trtype": "tcp", 00:12:34.059 "traddr": "", 00:12:34.059 "trsvcid": "4421" 00:12:34.059 }, 00:12:34.059 "method": "nvmf_subsystem_remove_listener", 00:12:34.059 "req_id": 1 00:12:34.059 } 00:12:34.059 Got JSON-RPC error response 00:12:34.059 response: 00:12:34.059 { 00:12:34.059 "code": -32602, 00:12:34.059 "message": "Invalid parameters" 00:12:34.059 }' 00:12:34.059 17:48:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:34.059 { 00:12:34.059 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:34.059 "listen_address": { 00:12:34.059 "trtype": "tcp", 00:12:34.059 "traddr": "", 00:12:34.059 "trsvcid": "4421" 00:12:34.059 }, 00:12:34.059 "method": "nvmf_subsystem_remove_listener", 00:12:34.059 "req_id": 1 00:12:34.059 } 00:12:34.059 Got JSON-RPC error response 00:12:34.059 response: 00:12:34.059 { 00:12:34.059 "code": -32602, 00:12:34.059 "message": "Invalid parameters" 00:12:34.059 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:34.059 17:48:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21040 -i 0 00:12:34.317 [2024-07-20 17:48:08.881241] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21040: invalid cntlid range [0-65519] 00:12:34.317 17:48:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:34.317 { 00:12:34.317 "nqn": "nqn.2016-06.io.spdk:cnode21040", 00:12:34.317 "min_cntlid": 0, 00:12:34.317 "method": "nvmf_create_subsystem", 00:12:34.317 "req_id": 1 00:12:34.317 } 00:12:34.317 Got JSON-RPC error response 00:12:34.317 response: 00:12:34.317 { 00:12:34.317 "code": -32602, 00:12:34.317 "message": "Invalid 
cntlid range [0-65519]" 00:12:34.317 }' 00:12:34.317 17:48:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:34.317 { 00:12:34.317 "nqn": "nqn.2016-06.io.spdk:cnode21040", 00:12:34.317 "min_cntlid": 0, 00:12:34.317 "method": "nvmf_create_subsystem", 00:12:34.317 "req_id": 1 00:12:34.317 } 00:12:34.317 Got JSON-RPC error response 00:12:34.317 response: 00:12:34.317 { 00:12:34.317 "code": -32602, 00:12:34.317 "message": "Invalid cntlid range [0-65519]" 00:12:34.317 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:34.317 17:48:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5751 -i 65520 00:12:34.574 [2024-07-20 17:48:09.142069] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5751: invalid cntlid range [65520-65519] 00:12:34.574 17:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:34.574 { 00:12:34.574 "nqn": "nqn.2016-06.io.spdk:cnode5751", 00:12:34.574 "min_cntlid": 65520, 00:12:34.574 "method": "nvmf_create_subsystem", 00:12:34.574 "req_id": 1 00:12:34.574 } 00:12:34.574 Got JSON-RPC error response 00:12:34.574 response: 00:12:34.574 { 00:12:34.574 "code": -32602, 00:12:34.574 "message": "Invalid cntlid range [65520-65519]" 00:12:34.574 }' 00:12:34.574 17:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:34.574 { 00:12:34.574 "nqn": "nqn.2016-06.io.spdk:cnode5751", 00:12:34.574 "min_cntlid": 65520, 00:12:34.574 "method": "nvmf_create_subsystem", 00:12:34.574 "req_id": 1 00:12:34.574 } 00:12:34.574 Got JSON-RPC error response 00:12:34.574 response: 00:12:34.574 { 00:12:34.574 "code": -32602, 00:12:34.574 "message": "Invalid cntlid range [65520-65519]" 00:12:34.574 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:34.574 17:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4337 -I 0 00:12:34.831 [2024-07-20 17:48:09.390921] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4337: invalid cntlid range [1-0] 00:12:34.831 17:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:34.831 { 00:12:34.831 "nqn": "nqn.2016-06.io.spdk:cnode4337", 00:12:34.831 "max_cntlid": 0, 00:12:34.831 "method": "nvmf_create_subsystem", 00:12:34.831 "req_id": 1 00:12:34.831 } 00:12:34.831 Got JSON-RPC error response 00:12:34.831 response: 00:12:34.831 { 00:12:34.831 "code": -32602, 00:12:34.831 "message": "Invalid cntlid range [1-0]" 00:12:34.831 }' 00:12:34.831 17:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:34.831 { 00:12:34.831 "nqn": "nqn.2016-06.io.spdk:cnode4337", 00:12:34.831 "max_cntlid": 0, 00:12:34.831 "method": "nvmf_create_subsystem", 00:12:34.831 "req_id": 1 00:12:34.831 } 00:12:34.831 Got JSON-RPC error response 00:12:34.831 response: 00:12:34.831 { 00:12:34.831 "code": -32602, 00:12:34.831 "message": "Invalid cntlid range [1-0]" 00:12:34.831 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:34.831 17:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode305 -I 65520 00:12:35.088 [2024-07-20 17:48:09.631732] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode305: invalid cntlid range [1-65520] 
00:12:35.088 17:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:35.088 { 00:12:35.088 "nqn": "nqn.2016-06.io.spdk:cnode305", 00:12:35.088 "max_cntlid": 65520, 00:12:35.088 "method": "nvmf_create_subsystem", 00:12:35.088 "req_id": 1 00:12:35.088 } 00:12:35.088 Got JSON-RPC error response 00:12:35.088 response: 00:12:35.088 { 00:12:35.088 "code": -32602, 00:12:35.088 "message": "Invalid cntlid range [1-65520]" 00:12:35.088 }' 00:12:35.088 17:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:35.088 { 00:12:35.088 "nqn": "nqn.2016-06.io.spdk:cnode305", 00:12:35.088 "max_cntlid": 65520, 00:12:35.088 "method": "nvmf_create_subsystem", 00:12:35.088 "req_id": 1 00:12:35.088 } 00:12:35.088 Got JSON-RPC error response 00:12:35.088 response: 00:12:35.088 { 00:12:35.088 "code": -32602, 00:12:35.088 "message": "Invalid cntlid range [1-65520]" 00:12:35.088 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:35.088 17:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20529 -i 6 -I 5 00:12:35.088 [2024-07-20 17:48:09.872543] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20529: invalid cntlid range [6-5] 00:12:35.345 17:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:35.345 { 00:12:35.345 "nqn": "nqn.2016-06.io.spdk:cnode20529", 00:12:35.345 "min_cntlid": 6, 00:12:35.345 "max_cntlid": 5, 00:12:35.345 "method": "nvmf_create_subsystem", 00:12:35.345 "req_id": 1 00:12:35.345 } 00:12:35.345 Got JSON-RPC error response 00:12:35.345 response: 00:12:35.345 { 00:12:35.345 "code": -32602, 00:12:35.345 "message": "Invalid cntlid range [6-5]" 00:12:35.345 }' 00:12:35.345 17:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:35.345 { 00:12:35.345 "nqn": "nqn.2016-06.io.spdk:cnode20529", 00:12:35.345 "min_cntlid": 6, 00:12:35.345 "max_cntlid": 5, 00:12:35.345 "method": "nvmf_create_subsystem", 00:12:35.345 "req_id": 1 00:12:35.345 } 00:12:35.345 Got JSON-RPC error response 00:12:35.345 response: 00:12:35.345 { 00:12:35.345 "code": -32602, 00:12:35.345 "message": "Invalid cntlid range [6-5]" 00:12:35.345 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:35.345 17:48:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:35.346 { 00:12:35.346 "name": "foobar", 00:12:35.346 "method": "nvmf_delete_target", 00:12:35.346 "req_id": 1 00:12:35.346 } 00:12:35.346 Got JSON-RPC error response 00:12:35.346 response: 00:12:35.346 { 00:12:35.346 "code": -32602, 00:12:35.346 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:35.346 }' 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:35.346 { 00:12:35.346 "name": "foobar", 00:12:35.346 "method": "nvmf_delete_target", 00:12:35.346 "req_id": 1 00:12:35.346 } 00:12:35.346 Got JSON-RPC error response 00:12:35.346 response: 00:12:35.346 { 00:12:35.346 "code": -32602, 00:12:35.346 "message": "The specified target doesn't exist, cannot delete it." 
00:12:35.346 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:35.346 rmmod nvme_tcp 00:12:35.346 rmmod nvme_fabrics 00:12:35.346 rmmod nvme_keyring 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 882057 ']' 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 882057 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 882057 ']' 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 882057 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 882057 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 882057' 00:12:35.346 killing process with pid 882057 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 882057 00:12:35.346 17:48:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 882057 00:12:35.604 17:48:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:35.604 17:48:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:35.604 17:48:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:35.604 17:48:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.604 17:48:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:35.604 17:48:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.604 17:48:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.604 17:48:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.133 17:48:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:38.133 00:12:38.133 real 0m8.593s 00:12:38.133 user 0m20.238s 00:12:38.133 sys 0m2.415s 00:12:38.133 17:48:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:38.133 17:48:12 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:12:38.133 ************************************ 00:12:38.133 END TEST nvmf_invalid 00:12:38.133 ************************************ 00:12:38.133 17:48:12 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:38.133 17:48:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:38.133 17:48:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:38.133 17:48:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:38.133 ************************************ 00:12:38.133 START TEST nvmf_abort 00:12:38.133 ************************************ 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:38.133 * Looking for test storage... 00:12:38.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:38.133 17:48:12 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:12:38.133 17:48:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:40.027 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.027 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:12:40.027 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:40.027 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.028 
17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:40.028 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:40.028 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:40.028 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:40.028 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:40.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:40.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:12:40.028 00:12:40.028 --- 10.0.0.2 ping statistics --- 00:12:40.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.028 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:40.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:12:40.028 00:12:40.028 --- 10.0.0.1 ping statistics --- 00:12:40.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.028 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=885079 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 885079 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 885079 ']' 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:40.028 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:40.028 [2024-07-20 17:48:14.557378] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
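At this point the target side of the abort test is brought up: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace prepared above, and the script waits for its RPC socket before issuing commands; the target's startup messages continue in the records below. Stripped of the xtrace prefixes, the bring-up amounts to roughly the following sketch; waitforlisten is the autotest_common.sh helper named in the log, and backgrounding with & is assumed from the fact that the pid is captured.

  # Sketch of the target start-up recorded here (namespace, binary path and
  # flags are the ones in the log; running the binary in the background and
  # capturing $! are assumptions).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!                   # 885079 in this run
  waitforlisten "$nvmfpid"     # waits for the UNIX socket /var/tmp/spdk.sock, per the log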
00:12:40.028 [2024-07-20 17:48:14.557467] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.028 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.028 [2024-07-20 17:48:14.623670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:40.028 [2024-07-20 17:48:14.710693] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.028 [2024-07-20 17:48:14.710738] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.028 [2024-07-20 17:48:14.710766] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.028 [2024-07-20 17:48:14.710778] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.028 [2024-07-20 17:48:14.710788] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.028 [2024-07-20 17:48:14.710918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.028 [2024-07-20 17:48:14.710977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.028 [2024-07-20 17:48:14.710980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:40.286 [2024-07-20 17:48:14.846688] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:40.286 Malloc0 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:40.286 Delay0 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:40.286 17:48:14 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:40.286 [2024-07-20 17:48:14.913458] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.286 17:48:14 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:40.286 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.286 [2024-07-20 17:48:15.010514] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:42.814 Initializing NVMe Controllers 00:12:42.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:42.814 controller IO queue size 128 less than required 00:12:42.814 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:42.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:42.814 Initialization complete. Launching workers. 
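Condensed from the xtrace above, the target-side setup for the abort test reduces to the rpc.py sequence sketched here, issued against the nvmf_tgt running inside the cvl_0_0_ns_spdk namespace; the delay bdev layered on top of the malloc bdev adds enough artificial latency that the abort example, driven at queue depth 128, always has outstanding I/O to abort. A condensed readability sketch of the traced commands, not a replacement for abort.sh itself:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0                 # 64 MiB bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000           # ~1 s added latency (values in us)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host, -s: serial
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420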
00:12:42.814 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 25671 00:12:42.814 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 25732, failed to submit 62 00:12:42.814 success 25675, unsuccess 57, failed 0 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:42.814 rmmod nvme_tcp 00:12:42.814 rmmod nvme_fabrics 00:12:42.814 rmmod nvme_keyring 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 885079 ']' 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 885079 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 885079 ']' 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 885079 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 885079 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 885079' 00:12:42.814 killing process with pid 885079 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 885079 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 885079 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:42.814 17:48:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.815 17:48:17 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.815 17:48:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.349 17:48:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:45.349 00:12:45.349 real 0m7.189s 00:12:45.349 user 0m10.695s 00:12:45.349 sys 0m2.476s 00:12:45.349 17:48:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:45.349 17:48:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:45.349 ************************************ 00:12:45.349 END TEST nvmf_abort 00:12:45.349 ************************************ 00:12:45.349 17:48:19 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:45.349 17:48:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:45.349 17:48:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:45.349 17:48:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:45.349 ************************************ 00:12:45.349 START TEST nvmf_ns_hotplug_stress 00:12:45.349 ************************************ 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:45.349 * Looking for test storage... 00:12:45.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.349 17:48:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.349 17:48:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:45.349 17:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.253 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.253 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:47.253 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:47.253 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:47.253 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:47.253 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:47.253 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:47.253 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:47.253 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:47.253 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:12:47.253 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:47.253 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:47.254 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:47.254 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.254 17:48:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:47.254 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:47.254 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
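The stretch of nvmf/common.sh trace that follows moves the first detected port into its own network namespace to act as the target side while the second port stays in the root namespace as the initiator, then checks reachability in both directions. Condensed into plain ip/iptables commands, with the cvl_0_0 and cvl_0_1 interface names as detected above (a readability sketch of what the traced shell functions do, not a drop-in script):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                            # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target ns -> root ns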
00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:47.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:12:47.254 00:12:47.254 --- 10.0.0.2 ping statistics --- 00:12:47.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.254 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:47.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:12:47.254 00:12:47.254 --- 10.0.0.1 ping statistics --- 00:12:47.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.254 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=887419 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 887419 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 887419 ']' 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:47.254 17:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.254 [2024-07-20 17:48:21.830944] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:47.254 [2024-07-20 17:48:21.831020] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.254 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.254 [2024-07-20 17:48:21.896559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:47.254 [2024-07-20 17:48:21.981183] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:47.254 [2024-07-20 17:48:21.981237] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.254 [2024-07-20 17:48:21.981251] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.255 [2024-07-20 17:48:21.981263] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.255 [2024-07-20 17:48:21.981273] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.255 [2024-07-20 17:48:21.981416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.255 [2024-07-20 17:48:21.981483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.255 [2024-07-20 17:48:21.981486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.513 17:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:47.513 17:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:12:47.514 17:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:47.514 17:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:47.514 17:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.514 17:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.514 17:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:47.514 17:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:47.772 [2024-07-20 17:48:22.400752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.772 17:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:48.030 17:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.289 [2024-07-20 17:48:22.991762] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.289 17:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:48.545 17:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:48.802 Malloc0 00:12:48.802 17:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:49.059 Delay0 00:12:49.059 17:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.316 17:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:49.572 NULL1 00:12:49.572 17:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:50.137 17:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=887724 00:12:50.137 17:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:50.137 17:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:12:50.137 17:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.137 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.067 Read completed with error (sct=0, sc=11) 00:12:51.067 17:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.329 17:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:51.330 17:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:51.608 true 00:12:51.608 17:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:12:51.608 17:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.539 17:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.796 17:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:52.796 17:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:53.054 true 00:12:53.054 17:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:12:53.054 17:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.310 17:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.566 17:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:53.566 17:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:53.566 true 00:12:53.824 17:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:12:53.824 17:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.756 17:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:54.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:54.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:54.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:54.756 17:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:54.756 17:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:55.013 true 00:12:55.013 17:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:12:55.013 17:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.271 17:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.528 17:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:55.528 17:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:55.785 true 00:12:55.785 17:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:12:55.785 17:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.716 17:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.716 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.973 17:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:56.974 17:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:57.231 true 00:12:57.231 17:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:12:57.231 17:48:31 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.488 17:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.744 17:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:57.744 17:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:58.000 true 00:12:58.000 17:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:12:58.000 17:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.929 17:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.493 17:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:59.493 17:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:59.493 true 00:12:59.493 17:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:12:59.493 17:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.751 17:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.008 17:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:00.009 17:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:00.266 true 00:13:00.266 17:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:00.266 17:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.523 17:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.779 17:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:00.779 17:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:01.036 true 00:13:01.036 17:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:01.036 17:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.421 17:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.421 17:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:02.421 17:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:02.677 true 00:13:02.678 17:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:02.678 17:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.609 17:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.609 17:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:03.609 17:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:03.867 true 00:13:03.867 17:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:03.867 17:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.465 17:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.465 17:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:04.465 17:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:04.723 true 00:13:04.723 17:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:04.723 17:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.980 17:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.238 17:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:05.238 17:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:05.495 true 00:13:05.495 17:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:05.495 17:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.425 17:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.683 17:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:06.683 17:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:06.940 true 00:13:06.940 17:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:06.940 17:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.196 17:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.454 17:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:07.454 17:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:07.710 true 00:13:07.710 17:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:07.710 17:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.641 17:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.898 17:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:08.899 17:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:09.155 true 00:13:09.155 17:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:09.155 17:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.411 17:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.668 17:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:09.668 17:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:09.925 true 00:13:09.925 17:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:09.925 17:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.181 17:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.438 17:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:10.438 17:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:10.694 true 00:13:10.694 17:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:10.694 17:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.626 17:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.882 17:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:11.882 17:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:12.138 true 00:13:12.138 17:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:12.138 17:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.069 17:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.069 17:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:13.069 17:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:13.325 true 00:13:13.325 17:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:13.325 17:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.582 17:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.839 17:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:13.839 17:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:14.096 true 00:13:14.096 17:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:14.096 17:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.026 17:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.303 17:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:15.303 17:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:15.560 true 00:13:15.817 17:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:15.817 17:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.074 17:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.331 17:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:16.331 17:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:16.331 true 00:13:16.331 17:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:16.331 17:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.263 17:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.520 17:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:17.520 17:48:52 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:17.776 true 00:13:17.776 17:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:17.776 17:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.032 17:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.289 17:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:18.289 17:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:18.545 true 00:13:18.545 17:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:18.545 17:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.475 17:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:19.731 17:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:19.731 17:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:19.988 true 00:13:19.988 17:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:19.988 17:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.245 Initializing NVMe Controllers 00:13:20.245 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:20.245 Controller IO queue size 128, less than required. 00:13:20.245 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:20.245 Controller IO queue size 128, less than required. 00:13:20.245 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:20.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:20.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:20.245 Initialization complete. Launching workers. 
00:13:20.245 ========================================================
00:13:20.245 Latency(us)
00:13:20.245 Device Information : IOPS MiB/s Average min max
00:13:20.245 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1264.66 0.62 54376.03 2606.00 1011649.80
00:13:20.245 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11084.50 5.41 11549.17 3099.18 367630.39
00:13:20.245 ========================================================
00:13:20.245 Total : 12349.16 6.03 15935.01 2606.00 1011649.80
00:13:20.245
00:13:20.245 17:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.502 17:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:20.502 17:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:20.760 true 00:13:20.760 17:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 887724 00:13:20.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (887724) - No such process 00:13:20.760 17:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 887724 00:13:20.760 17:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.017 17:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:21.297 17:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:21.297 17:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:21.297 17:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:21.297 17:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:21.297 17:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:21.554 null0 00:13:21.554 17:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:21.554 17:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:21.554 17:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:21.811 null1 00:13:21.811 17:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:21.811 17:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:21.811 17:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:22.068 null2 00:13:22.068 17:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:22.068 17:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:13:22.068 17:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:22.325 null3 00:13:22.325 17:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:22.325 17:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:22.325 17:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:22.582 null4 00:13:22.582 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:22.582 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:22.582 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:22.840 null5 00:13:22.840 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:22.840 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:22.840 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:23.097 null6 00:13:23.097 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:23.097 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:23.097 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:23.354 null7 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:23.354 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:23.355 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:23.355 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:23.355 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:23.355 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:23.355 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:23.355 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.355 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:23.355 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:23.355 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:23.355 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:23.355 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:23.355 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:23.355 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:23.355 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 891760 891761 891763 891765 891767 891769 891771 891773 00:13:23.355 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.355 17:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:23.612 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:23.612 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.612 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:23.612 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.612 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:23.612 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:23.612 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:23.612 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.870 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:24.127 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:24.127 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.127 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:24.128 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:24.128 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:24.128 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:24.128 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:24.128 17:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.385 17:48:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.385 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:24.642 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:24.642 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:24.642 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:24.642 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.642 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:24.642 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:24.642 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:24.642 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.899 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:25.157 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:25.157 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:25.157 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:25.157 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:25.157 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:25.157 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.157 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:25.157 17:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.415 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.415 
17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:25.672 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:25.672 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:25.672 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:25.672 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:25.672 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.672 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:25.672 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:25.672 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:25.944 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.945 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.945 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:26.203 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:26.203 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:26.203 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:26.203 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:26.203 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:26.460 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.460 17:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:26.460 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:26.460 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:26.460 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.460 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:26.460 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.460 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.460 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.718 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:26.976 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:26.976 
17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:26.976 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:26.976 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:26.976 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.976 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:26.976 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:26.976 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:27.233 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.234 17:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:27.491 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:27.491 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:27.491 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:27.492 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:27.492 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:27.492 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:27.492 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.492 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.749 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:28.011 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:28.011 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.011 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:28.011 
17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:28.011 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:28.011 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:28.011 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:28.011 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.284 17:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:28.545 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.545 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:28.545 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:28.545 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:28.545 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:28.545 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:28.545 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:28.545 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:28.802 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:28.802 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.802 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:28.802 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.802 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:28.802 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.802 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
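The interleaved @16/@17/@18 records above and below are the hot-plug loop of target/ns_hotplug_stress.sh. The pattern in the trace (a (( ++i )) / (( i < 10 )) pair before every add, batches of eight adds followed by eight removes, occasionally doubled counter lines, and a trailing run of bare counter checks once the adds stop) is consistent with eight concurrent workers, one per namespace, each attaching its null bdev to nqn.2016-06.io.spdk:cnode1 and detaching it again ten times. A rough reconstruction under that assumption, not the actual script: the function name add_remove and the backgrounding are guesses, while the rpc.py calls and the i < 10 bound are taken from the trace.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    add_remove() {                                   # assumed worker, one per namespace id
        local nsid=$1
        for (( i = 0; i < 10; ++i )); do             # ns_hotplug_stress.sh@16 in the trace
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "null$((nsid - 1))"   # @17
            "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$nsid"                       # @18
        done
    }

    for n in $(seq 1 8); do
        add_remove "$n" &                            # eight workers hot-plugging nsid 1-8 in parallel
    done
    wait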
00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:28.803 rmmod nvme_tcp 00:13:28.803 rmmod nvme_fabrics 00:13:28.803 rmmod nvme_keyring 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 887419 ']' 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 887419 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 887419 ']' 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 887419 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 887419 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 887419' 00:13:28.803 killing process with pid 887419 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 887419 00:13:28.803 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 887419 00:13:29.060 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:29.060 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:29.061 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:13:29.061 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:29.061 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:29.061 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.061 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.061 17:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.586 17:49:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:31.586 00:13:31.586 real 0m46.204s 00:13:31.586 user 3m29.982s 00:13:31.586 sys 0m16.510s 00:13:31.586 17:49:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:31.586 17:49:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.586 ************************************ 00:13:31.586 END TEST nvmf_ns_hotplug_stress 00:13:31.586 ************************************ 00:13:31.586 17:49:05 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:31.586 17:49:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:31.586 17:49:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:31.586 17:49:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:31.586 ************************************ 00:13:31.586 START TEST nvmf_connect_stress 00:13:31.586 ************************************ 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:31.586 * Looking for test storage... 
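Both stress tests in this section exit through the same nvmftestfini path visible in the records above: the trap is cleared, the host-side NVMe modules are unloaded, the nvmf_tgt reactor is killed, and the target network namespace plus the initiator address are removed before run_test prints the timing summary. An outline of that teardown, using the helper names from nvmf/common.sh as they appear in the trace (the step-by-step bodies are paraphrased, not the real functions):

    # nvmftestfini, as traced in nvmf/common.sh
    nvmfcleanup                  # sync, then modprobe -v -r nvme-tcp / nvme-fabrics
                                 # (rmmod nvme_tcp, nvme_fabrics, nvme_keyring in the log)
    killprocess "$nvmfpid"       # pid 887419 here: the nvmf_tgt reactor started by nvmfappstart
    remove_spdk_ns               # delete the cvl_0_0_ns_spdk target namespace
    ip -4 addr flush cvl_0_1     # drop the 10.0.0.1/24 initiator address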
00:13:31.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:31.586 17:49:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:33.488 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:33.488 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:33.488 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:33.489 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:33.489 17:49:07 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:33.489 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:33.489 17:49:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:33.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:33.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:13:33.489 00:13:33.489 --- 10.0.0.2 ping statistics --- 00:13:33.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.489 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:33.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:33.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:13:33.489 00:13:33.489 --- 10.0.0.1 ping statistics --- 00:13:33.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.489 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=894517 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 894517 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 894517 ']' 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:33.489 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.489 [2024-07-20 17:49:08.130838] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
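The common.sh trace above shows how nvmftestinit turns the two E810 ports (cvl_0_0 and cvl_0_1 on 0000:0a:00.*) into a self-contained target/initiator pair: cvl_0_0 is moved into a fresh network namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened, both directions are ping-tested, and nvmf_tgt is started inside the namespace on cores 1-3 (-m 0xE). Condensed from the trace, with paths shortened; the backgrounding and pid capture are paraphrased (the log itself shows nvmfpid=894517 followed by waitforlisten):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # accept NVMe/TCP (4420) on cvl_0_1
    ping -c 1 10.0.0.2                                               # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target ns -> root ns
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!                                                       # 894517 in this run; then waitforlisten $nvmfpid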
00:13:33.489 [2024-07-20 17:49:08.130912] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.489 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.489 [2024-07-20 17:49:08.199452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:33.748 [2024-07-20 17:49:08.289914] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.748 [2024-07-20 17:49:08.289971] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.748 [2024-07-20 17:49:08.289988] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.748 [2024-07-20 17:49:08.290002] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.748 [2024-07-20 17:49:08.290014] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.748 [2024-07-20 17:49:08.290111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.748 [2024-07-20 17:49:08.290137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.748 [2024-07-20 17:49:08.290140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.748 [2024-07-20 17:49:08.427529] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.748 [2024-07-20 17:49:08.455977] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.748 NULL1 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=894653 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.748 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.314 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.314 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:34.314 17:49:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.314 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.314 17:49:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.572 17:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.572 17:49:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:34.572 17:49:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.572 17:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.572 17:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.830 17:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.830 17:49:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:34.830 17:49:09 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.830 17:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.830 17:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.106 17:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.106 17:49:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:35.106 17:49:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.106 17:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.106 17:49:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.364 17:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.364 17:49:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:35.364 17:49:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.364 17:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.364 17:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.927 17:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.927 17:49:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:35.927 17:49:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.927 17:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.927 17:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.184 17:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.184 17:49:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:36.184 17:49:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.184 17:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.184 17:49:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.440 17:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.440 17:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:36.440 17:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.440 17:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.440 17:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.697 17:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.697 17:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:36.697 17:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.697 17:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.697 17:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.954 17:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.954 17:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:36.954 17:49:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:13:36.954 17:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.954 17:49:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.518 17:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.518 17:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:37.518 17:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.518 17:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.518 17:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.775 17:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.775 17:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:37.775 17:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.775 17:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.775 17:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.032 17:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.032 17:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:38.032 17:49:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.032 17:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.032 17:49:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.290 17:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.290 17:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:38.290 17:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.290 17:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.290 17:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.546 17:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.546 17:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:38.546 17:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.546 17:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.546 17:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.109 17:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.109 17:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:39.109 17:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.109 17:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.109 17:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.365 17:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.365 17:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:39.365 17:49:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.365 17:49:13 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.365 17:49:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.621 17:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.621 17:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:39.621 17:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.621 17:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.621 17:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.877 17:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.877 17:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:39.877 17:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.877 17:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.877 17:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.441 17:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.441 17:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:40.441 17:49:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.441 17:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.441 17:49:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.697 17:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.697 17:49:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:40.697 17:49:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.697 17:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.697 17:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.953 17:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.953 17:49:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:40.953 17:49:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.953 17:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.953 17:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.210 17:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.210 17:49:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:41.210 17:49:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.210 17:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.210 17:49:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.539 17:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.539 17:49:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:41.539 17:49:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.539 17:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.539 
17:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.796 17:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.796 17:49:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:41.797 17:49:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.797 17:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.797 17:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.359 17:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.359 17:49:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:42.359 17:49:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.359 17:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.359 17:49:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.616 17:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.616 17:49:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:42.616 17:49:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.616 17:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.616 17:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.873 17:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.873 17:49:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:42.873 17:49:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.873 17:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.873 17:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.131 17:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.131 17:49:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:43.131 17:49:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.131 17:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.131 17:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.388 17:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.388 17:49:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:43.388 17:49:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.388 17:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.388 17:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.952 17:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.952 17:49:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:43.952 17:49:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.952 17:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.952 17:49:18 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:13:43.952 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:44.209 17:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.209 17:49:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 894653 00:13:44.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (894653) - No such process 00:13:44.209 17:49:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 894653 00:13:44.209 17:49:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:44.209 17:49:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:44.209 17:49:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:44.209 17:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:44.209 17:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:44.209 17:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:44.209 17:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:44.209 17:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:44.209 17:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:44.209 rmmod nvme_tcp 00:13:44.209 rmmod nvme_fabrics 00:13:44.210 rmmod nvme_keyring 00:13:44.210 17:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:44.210 17:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:44.210 17:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:44.210 17:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 894517 ']' 00:13:44.210 17:49:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 894517 00:13:44.210 17:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 894517 ']' 00:13:44.210 17:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 894517 00:13:44.210 17:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:13:44.210 17:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:44.210 17:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 894517 00:13:44.210 17:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:44.210 17:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:44.210 17:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 894517' 00:13:44.210 killing process with pid 894517 00:13:44.210 17:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 894517 00:13:44.210 17:49:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 894517 00:13:44.467 17:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:44.467 17:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:44.467 17:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:44.467 17:49:19 nvmf_tcp.nvmf_connect_stress 
-- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.467 17:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:44.467 17:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.467 17:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.467 17:49:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.992 17:49:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:46.992 00:13:46.992 real 0m15.278s 00:13:46.992 user 0m37.778s 00:13:46.992 sys 0m6.220s 00:13:46.992 17:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:46.992 17:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.992 ************************************ 00:13:46.992 END TEST nvmf_connect_stress 00:13:46.992 ************************************ 00:13:46.992 17:49:21 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:46.992 17:49:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:46.992 17:49:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:46.992 17:49:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:46.992 ************************************ 00:13:46.992 START TEST nvmf_fused_ordering 00:13:46.992 ************************************ 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:46.992 * Looking for test storage... 
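The connect_stress trace above loops on two script lines until the stress process exits: line 34 probes the process with kill -0 894653 and line 35 issues rpc_cmd against the target; once kill reports "No such process", the script waits on the PID, removes rpc.txt, and calls nvmftestfini. A minimal sketch of that polling pattern, assuming a hypothetical rpc_cmd helper and reusing this run's PID purely for illustration:

    PID=894653                             # stress process launched earlier in this run (illustrative)
    while kill -0 "$PID" 2>/dev/null; do   # kill -0 only checks that the process still exists, sends no signal
        rpc_cmd                            # keep exercising the target's RPC server while the I/O stress runs
    done
    wait "$PID" 2>/dev/null || true        # reap it once kill reports "No such process"
    rm -f rpc.txt                          # scratch file the trace removes in its cleanup step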
00:13:46.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:46.992 17:49:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.890 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.890 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:48.890 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:48.890 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:48.890 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:48.890 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:48.890 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:48.890 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:48.890 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:48.890 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:48.890 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:48.890 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:48.890 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:48.890 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:48.890 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:48.890 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:48.891 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:48.891 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:48.891 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.891 17:49:23 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:48.891 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:48.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:48.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:13:48.891 00:13:48.891 --- 10.0.0.2 ping statistics --- 00:13:48.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.891 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:48.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:48.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:13:48.891 00:13:48.891 --- 10.0.0.1 ping statistics --- 00:13:48.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.891 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=897801 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 897801 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 897801 ']' 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:48.891 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.891 [2024-07-20 17:49:23.446093] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
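Before starting nvmf_tgt, the nvmf_tcp_init trace above carves the dual-port E810 NIC into a small point-to-point test network: one port (cvl_0_0) is moved into a dedicated network namespace and becomes the target side at 10.0.0.2/24, the peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened in iptables, and a ping in each direction confirms reachability. Condensed from the commands in the trace (interface names and addresses are the ones used in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator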
00:13:48.891 [2024-07-20 17:49:23.446181] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.891 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.891 [2024-07-20 17:49:23.514680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.891 [2024-07-20 17:49:23.603274] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.891 [2024-07-20 17:49:23.603338] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.891 [2024-07-20 17:49:23.603355] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.891 [2024-07-20 17:49:23.603369] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.891 [2024-07-20 17:49:23.603381] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.891 [2024-07-20 17:49:23.603413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.148 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:49.148 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:13:49.148 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:49.148 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:49.148 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.148 17:49:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.148 17:49:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:49.148 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.148 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.148 [2024-07-20 17:49:23.744222] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.148 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.149 [2024-07-20 17:49:23.760481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.149 NULL1 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.149 17:49:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:49.149 [2024-07-20 17:49:23.803611] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:49.149 [2024-07-20 17:49:23.803653] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid897824 ] 00:13:49.149 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.081 Attached to nqn.2016-06.io.spdk:cnode1 00:13:50.081 Namespace ID: 1 size: 1GB 00:13:50.081 fused_ordering(0) 00:13:50.081 fused_ordering(1) 00:13:50.081 fused_ordering(2) 00:13:50.081 fused_ordering(3) 00:13:50.081 fused_ordering(4) 00:13:50.081 fused_ordering(5) 00:13:50.081 fused_ordering(6) 00:13:50.081 fused_ordering(7) 00:13:50.081 fused_ordering(8) 00:13:50.081 fused_ordering(9) 00:13:50.081 fused_ordering(10) 00:13:50.081 fused_ordering(11) 00:13:50.081 fused_ordering(12) 00:13:50.081 fused_ordering(13) 00:13:50.081 fused_ordering(14) 00:13:50.081 fused_ordering(15) 00:13:50.081 fused_ordering(16) 00:13:50.081 fused_ordering(17) 00:13:50.081 fused_ordering(18) 00:13:50.081 fused_ordering(19) 00:13:50.081 fused_ordering(20) 00:13:50.081 fused_ordering(21) 00:13:50.081 fused_ordering(22) 00:13:50.081 fused_ordering(23) 00:13:50.081 fused_ordering(24) 00:13:50.081 fused_ordering(25) 00:13:50.081 fused_ordering(26) 00:13:50.081 fused_ordering(27) 00:13:50.081 fused_ordering(28) 00:13:50.081 fused_ordering(29) 00:13:50.081 fused_ordering(30) 00:13:50.081 fused_ordering(31) 00:13:50.081 fused_ordering(32) 00:13:50.081 fused_ordering(33) 00:13:50.081 fused_ordering(34) 00:13:50.081 fused_ordering(35) 00:13:50.081 fused_ordering(36) 00:13:50.081 fused_ordering(37) 00:13:50.081 fused_ordering(38) 00:13:50.081 fused_ordering(39) 00:13:50.081 fused_ordering(40) 00:13:50.081 fused_ordering(41) 00:13:50.081 fused_ordering(42) 00:13:50.081 fused_ordering(43) 00:13:50.081 fused_ordering(44) 00:13:50.081 fused_ordering(45) 
00:13:50.081 fused_ordering(46) 00:13:50.081 fused_ordering(47) 00:13:50.081 fused_ordering(48) 00:13:50.081 fused_ordering(49) 00:13:50.081 fused_ordering(50) 00:13:50.081 fused_ordering(51) 00:13:50.081 fused_ordering(52) 00:13:50.081 fused_ordering(53) 00:13:50.081 fused_ordering(54) 00:13:50.081 fused_ordering(55) 00:13:50.081 fused_ordering(56) 00:13:50.081 fused_ordering(57) 00:13:50.081 fused_ordering(58) 00:13:50.081 fused_ordering(59) 00:13:50.081 fused_ordering(60) 00:13:50.081 fused_ordering(61) 00:13:50.081 fused_ordering(62) 00:13:50.081 fused_ordering(63) 00:13:50.081 fused_ordering(64) 00:13:50.081 fused_ordering(65) 00:13:50.081 fused_ordering(66) 00:13:50.081 fused_ordering(67) 00:13:50.081 fused_ordering(68) 00:13:50.081 fused_ordering(69) 00:13:50.081 fused_ordering(70) 00:13:50.081 fused_ordering(71) 00:13:50.081 fused_ordering(72) 00:13:50.081 fused_ordering(73) 00:13:50.081 fused_ordering(74) 00:13:50.081 fused_ordering(75) 00:13:50.081 fused_ordering(76) 00:13:50.081 fused_ordering(77) 00:13:50.081 fused_ordering(78) 00:13:50.081 fused_ordering(79) 00:13:50.081 fused_ordering(80) 00:13:50.081 fused_ordering(81) 00:13:50.081 fused_ordering(82) 00:13:50.081 fused_ordering(83) 00:13:50.081 fused_ordering(84) 00:13:50.081 fused_ordering(85) 00:13:50.081 fused_ordering(86) 00:13:50.081 fused_ordering(87) 00:13:50.081 fused_ordering(88) 00:13:50.081 fused_ordering(89) 00:13:50.081 fused_ordering(90) 00:13:50.081 fused_ordering(91) 00:13:50.081 fused_ordering(92) 00:13:50.081 fused_ordering(93) 00:13:50.081 fused_ordering(94) 00:13:50.081 fused_ordering(95) 00:13:50.081 fused_ordering(96) 00:13:50.081 fused_ordering(97) 00:13:50.081 fused_ordering(98) 00:13:50.081 fused_ordering(99) 00:13:50.081 fused_ordering(100) 00:13:50.081 fused_ordering(101) 00:13:50.081 fused_ordering(102) 00:13:50.081 fused_ordering(103) 00:13:50.081 fused_ordering(104) 00:13:50.081 fused_ordering(105) 00:13:50.081 fused_ordering(106) 00:13:50.081 fused_ordering(107) 00:13:50.081 fused_ordering(108) 00:13:50.081 fused_ordering(109) 00:13:50.081 fused_ordering(110) 00:13:50.081 fused_ordering(111) 00:13:50.081 fused_ordering(112) 00:13:50.081 fused_ordering(113) 00:13:50.081 fused_ordering(114) 00:13:50.081 fused_ordering(115) 00:13:50.081 fused_ordering(116) 00:13:50.081 fused_ordering(117) 00:13:50.081 fused_ordering(118) 00:13:50.081 fused_ordering(119) 00:13:50.081 fused_ordering(120) 00:13:50.081 fused_ordering(121) 00:13:50.081 fused_ordering(122) 00:13:50.081 fused_ordering(123) 00:13:50.081 fused_ordering(124) 00:13:50.081 fused_ordering(125) 00:13:50.081 fused_ordering(126) 00:13:50.081 fused_ordering(127) 00:13:50.081 fused_ordering(128) 00:13:50.081 fused_ordering(129) 00:13:50.081 fused_ordering(130) 00:13:50.081 fused_ordering(131) 00:13:50.081 fused_ordering(132) 00:13:50.081 fused_ordering(133) 00:13:50.081 fused_ordering(134) 00:13:50.081 fused_ordering(135) 00:13:50.081 fused_ordering(136) 00:13:50.081 fused_ordering(137) 00:13:50.081 fused_ordering(138) 00:13:50.081 fused_ordering(139) 00:13:50.081 fused_ordering(140) 00:13:50.081 fused_ordering(141) 00:13:50.081 fused_ordering(142) 00:13:50.081 fused_ordering(143) 00:13:50.081 fused_ordering(144) 00:13:50.081 fused_ordering(145) 00:13:50.081 fused_ordering(146) 00:13:50.081 fused_ordering(147) 00:13:50.081 fused_ordering(148) 00:13:50.081 fused_ordering(149) 00:13:50.081 fused_ordering(150) 00:13:50.081 fused_ordering(151) 00:13:50.081 fused_ordering(152) 00:13:50.081 fused_ordering(153) 00:13:50.081 fused_ordering(154) 
00:13:50.081 fused_ordering(155) 00:13:50.081 fused_ordering(156) 00:13:50.081 fused_ordering(157) 00:13:50.081 fused_ordering(158) 00:13:50.081 fused_ordering(159) 00:13:50.081 fused_ordering(160) 00:13:50.081 fused_ordering(161) 00:13:50.081 fused_ordering(162) 00:13:50.081 fused_ordering(163) 00:13:50.081 fused_ordering(164) 00:13:50.081 fused_ordering(165) 00:13:50.081 fused_ordering(166) 00:13:50.081 fused_ordering(167) 00:13:50.081 fused_ordering(168) 00:13:50.081 fused_ordering(169) 00:13:50.081 fused_ordering(170) 00:13:50.081 fused_ordering(171) 00:13:50.081 fused_ordering(172) 00:13:50.081 fused_ordering(173) 00:13:50.081 fused_ordering(174) 00:13:50.081 fused_ordering(175) 00:13:50.081 fused_ordering(176) 00:13:50.081 fused_ordering(177) 00:13:50.081 fused_ordering(178) 00:13:50.081 fused_ordering(179) 00:13:50.081 fused_ordering(180) 00:13:50.081 fused_ordering(181) 00:13:50.081 fused_ordering(182) 00:13:50.081 fused_ordering(183) 00:13:50.081 fused_ordering(184) 00:13:50.081 fused_ordering(185) 00:13:50.081 fused_ordering(186) 00:13:50.081 fused_ordering(187) 00:13:50.081 fused_ordering(188) 00:13:50.081 fused_ordering(189) 00:13:50.081 fused_ordering(190) 00:13:50.081 fused_ordering(191) 00:13:50.081 fused_ordering(192) 00:13:50.081 fused_ordering(193) 00:13:50.081 fused_ordering(194) 00:13:50.081 fused_ordering(195) 00:13:50.081 fused_ordering(196) 00:13:50.081 fused_ordering(197) 00:13:50.081 fused_ordering(198) 00:13:50.081 fused_ordering(199) 00:13:50.081 fused_ordering(200) 00:13:50.081 fused_ordering(201) 00:13:50.081 fused_ordering(202) 00:13:50.081 fused_ordering(203) 00:13:50.081 fused_ordering(204) 00:13:50.081 fused_ordering(205) 00:13:51.467 fused_ordering(206) 00:13:51.467 fused_ordering(207) 00:13:51.467 fused_ordering(208) 00:13:51.467 fused_ordering(209) 00:13:51.467 fused_ordering(210) 00:13:51.467 fused_ordering(211) 00:13:51.467 fused_ordering(212) 00:13:51.467 fused_ordering(213) 00:13:51.467 fused_ordering(214) 00:13:51.467 fused_ordering(215) 00:13:51.467 fused_ordering(216) 00:13:51.467 fused_ordering(217) 00:13:51.467 fused_ordering(218) 00:13:51.467 fused_ordering(219) 00:13:51.467 fused_ordering(220) 00:13:51.467 fused_ordering(221) 00:13:51.467 fused_ordering(222) 00:13:51.467 fused_ordering(223) 00:13:51.467 fused_ordering(224) 00:13:51.467 fused_ordering(225) 00:13:51.467 fused_ordering(226) 00:13:51.467 fused_ordering(227) 00:13:51.467 fused_ordering(228) 00:13:51.467 fused_ordering(229) 00:13:51.467 fused_ordering(230) 00:13:51.467 fused_ordering(231) 00:13:51.467 fused_ordering(232) 00:13:51.467 fused_ordering(233) 00:13:51.467 fused_ordering(234) 00:13:51.467 fused_ordering(235) 00:13:51.467 fused_ordering(236) 00:13:51.467 fused_ordering(237) 00:13:51.467 fused_ordering(238) 00:13:51.467 fused_ordering(239) 00:13:51.467 fused_ordering(240) 00:13:51.467 fused_ordering(241) 00:13:51.467 fused_ordering(242) 00:13:51.467 fused_ordering(243) 00:13:51.467 fused_ordering(244) 00:13:51.467 fused_ordering(245) 00:13:51.467 fused_ordering(246) 00:13:51.467 fused_ordering(247) 00:13:51.467 fused_ordering(248) 00:13:51.467 fused_ordering(249) 00:13:51.467 fused_ordering(250) 00:13:51.467 fused_ordering(251) 00:13:51.467 fused_ordering(252) 00:13:51.467 fused_ordering(253) 00:13:51.467 fused_ordering(254) 00:13:51.467 fused_ordering(255) 00:13:51.467 fused_ordering(256) 00:13:51.467 fused_ordering(257) 00:13:51.467 fused_ordering(258) 00:13:51.467 fused_ordering(259) 00:13:51.467 fused_ordering(260) 00:13:51.467 fused_ordering(261) 00:13:51.467 
fused_ordering(262) 00:13:51.467 fused_ordering(263) 00:13:51.467 fused_ordering(264) 00:13:51.467 fused_ordering(265) 00:13:51.467 fused_ordering(266) 00:13:51.467 fused_ordering(267) 00:13:51.467 fused_ordering(268) 00:13:51.467 fused_ordering(269) 00:13:51.467 fused_ordering(270) 00:13:51.467 fused_ordering(271) 00:13:51.467 fused_ordering(272) 00:13:51.467 fused_ordering(273) 00:13:51.467 fused_ordering(274) 00:13:51.467 fused_ordering(275) 00:13:51.467 fused_ordering(276) 00:13:51.467 fused_ordering(277) 00:13:51.467 fused_ordering(278) 00:13:51.467 fused_ordering(279) 00:13:51.467 fused_ordering(280) 00:13:51.467 fused_ordering(281) 00:13:51.467 fused_ordering(282) 00:13:51.467 fused_ordering(283) 00:13:51.467 fused_ordering(284) 00:13:51.467 fused_ordering(285) 00:13:51.467 fused_ordering(286) 00:13:51.467 fused_ordering(287) 00:13:51.467 fused_ordering(288) 00:13:51.467 fused_ordering(289) 00:13:51.467 fused_ordering(290) 00:13:51.468 fused_ordering(291) 00:13:51.468 fused_ordering(292) 00:13:51.468 fused_ordering(293) 00:13:51.468 fused_ordering(294) 00:13:51.468 fused_ordering(295) 00:13:51.468 fused_ordering(296) 00:13:51.468 fused_ordering(297) 00:13:51.468 fused_ordering(298) 00:13:51.468 fused_ordering(299) 00:13:51.468 fused_ordering(300) 00:13:51.468 fused_ordering(301) 00:13:51.468 fused_ordering(302) 00:13:51.468 fused_ordering(303) 00:13:51.468 fused_ordering(304) 00:13:51.468 fused_ordering(305) 00:13:51.468 fused_ordering(306) 00:13:51.468 fused_ordering(307) 00:13:51.468 fused_ordering(308) 00:13:51.468 fused_ordering(309) 00:13:51.468 fused_ordering(310) 00:13:51.468 fused_ordering(311) 00:13:51.468 fused_ordering(312) 00:13:51.468 fused_ordering(313) 00:13:51.468 fused_ordering(314) 00:13:51.468 fused_ordering(315) 00:13:51.468 fused_ordering(316) 00:13:51.468 fused_ordering(317) 00:13:51.468 fused_ordering(318) 00:13:51.468 fused_ordering(319) 00:13:51.468 fused_ordering(320) 00:13:51.468 fused_ordering(321) 00:13:51.468 fused_ordering(322) 00:13:51.468 fused_ordering(323) 00:13:51.468 fused_ordering(324) 00:13:51.468 fused_ordering(325) 00:13:51.468 fused_ordering(326) 00:13:51.468 fused_ordering(327) 00:13:51.468 fused_ordering(328) 00:13:51.468 fused_ordering(329) 00:13:51.468 fused_ordering(330) 00:13:51.468 fused_ordering(331) 00:13:51.468 fused_ordering(332) 00:13:51.468 fused_ordering(333) 00:13:51.468 fused_ordering(334) 00:13:51.468 fused_ordering(335) 00:13:51.468 fused_ordering(336) 00:13:51.468 fused_ordering(337) 00:13:51.468 fused_ordering(338) 00:13:51.468 fused_ordering(339) 00:13:51.468 fused_ordering(340) 00:13:51.468 fused_ordering(341) 00:13:51.468 fused_ordering(342) 00:13:51.468 fused_ordering(343) 00:13:51.468 fused_ordering(344) 00:13:51.468 fused_ordering(345) 00:13:51.468 fused_ordering(346) 00:13:51.468 fused_ordering(347) 00:13:51.468 fused_ordering(348) 00:13:51.468 fused_ordering(349) 00:13:51.468 fused_ordering(350) 00:13:51.468 fused_ordering(351) 00:13:51.468 fused_ordering(352) 00:13:51.468 fused_ordering(353) 00:13:51.468 fused_ordering(354) 00:13:51.468 fused_ordering(355) 00:13:51.468 fused_ordering(356) 00:13:51.468 fused_ordering(357) 00:13:51.468 fused_ordering(358) 00:13:51.468 fused_ordering(359) 00:13:51.468 fused_ordering(360) 00:13:51.468 fused_ordering(361) 00:13:51.468 fused_ordering(362) 00:13:51.468 fused_ordering(363) 00:13:51.468 fused_ordering(364) 00:13:51.468 fused_ordering(365) 00:13:51.468 fused_ordering(366) 00:13:51.468 fused_ordering(367) 00:13:51.468 fused_ordering(368) 00:13:51.468 fused_ordering(369) 
00:13:51.468 fused_ordering(370) 00:13:51.468 fused_ordering(371) 00:13:51.468 fused_ordering(372) 00:13:51.468 fused_ordering(373) 00:13:51.468 fused_ordering(374) 00:13:51.468 fused_ordering(375) 00:13:51.468 fused_ordering(376) 00:13:51.468 fused_ordering(377) 00:13:51.468 fused_ordering(378) 00:13:51.468 fused_ordering(379) 00:13:51.468 fused_ordering(380) 00:13:51.468 fused_ordering(381) 00:13:51.468 fused_ordering(382) 00:13:51.468 fused_ordering(383) 00:13:51.468 fused_ordering(384) 00:13:51.468 fused_ordering(385) 00:13:51.468 fused_ordering(386) 00:13:51.468 fused_ordering(387) 00:13:51.468 fused_ordering(388) 00:13:51.468 fused_ordering(389) 00:13:51.468 fused_ordering(390) 00:13:51.468 fused_ordering(391) 00:13:51.468 fused_ordering(392) 00:13:51.468 fused_ordering(393) 00:13:51.468 fused_ordering(394) 00:13:51.468 fused_ordering(395) 00:13:51.468 fused_ordering(396) 00:13:51.468 fused_ordering(397) 00:13:51.468 fused_ordering(398) 00:13:51.468 fused_ordering(399) 00:13:51.468 fused_ordering(400) 00:13:51.468 fused_ordering(401) 00:13:51.468 fused_ordering(402) 00:13:51.468 fused_ordering(403) 00:13:51.468 fused_ordering(404) 00:13:51.468 fused_ordering(405) 00:13:51.468 fused_ordering(406) 00:13:51.468 fused_ordering(407) 00:13:51.468 fused_ordering(408) 00:13:51.468 fused_ordering(409) 00:13:51.468 fused_ordering(410) 00:13:52.401 fused_ordering(411) 00:13:52.401 fused_ordering(412) 00:13:52.401 fused_ordering(413) 00:13:52.401 fused_ordering(414) 00:13:52.401 fused_ordering(415) 00:13:52.401 fused_ordering(416) 00:13:52.401 fused_ordering(417) 00:13:52.401 fused_ordering(418) 00:13:52.401 fused_ordering(419) 00:13:52.401 fused_ordering(420) 00:13:52.401 fused_ordering(421) 00:13:52.401 fused_ordering(422) 00:13:52.401 fused_ordering(423) 00:13:52.401 fused_ordering(424) 00:13:52.401 fused_ordering(425) 00:13:52.401 fused_ordering(426) 00:13:52.401 fused_ordering(427) 00:13:52.401 fused_ordering(428) 00:13:52.401 fused_ordering(429) 00:13:52.401 fused_ordering(430) 00:13:52.401 fused_ordering(431) 00:13:52.401 fused_ordering(432) 00:13:52.401 fused_ordering(433) 00:13:52.401 fused_ordering(434) 00:13:52.401 fused_ordering(435) 00:13:52.401 fused_ordering(436) 00:13:52.401 fused_ordering(437) 00:13:52.401 fused_ordering(438) 00:13:52.401 fused_ordering(439) 00:13:52.401 fused_ordering(440) 00:13:52.401 fused_ordering(441) 00:13:52.401 fused_ordering(442) 00:13:52.401 fused_ordering(443) 00:13:52.401 fused_ordering(444) 00:13:52.401 fused_ordering(445) 00:13:52.401 fused_ordering(446) 00:13:52.401 fused_ordering(447) 00:13:52.401 fused_ordering(448) 00:13:52.401 fused_ordering(449) 00:13:52.401 fused_ordering(450) 00:13:52.401 fused_ordering(451) 00:13:52.401 fused_ordering(452) 00:13:52.401 fused_ordering(453) 00:13:52.401 fused_ordering(454) 00:13:52.401 fused_ordering(455) 00:13:52.401 fused_ordering(456) 00:13:52.401 fused_ordering(457) 00:13:52.401 fused_ordering(458) 00:13:52.401 fused_ordering(459) 00:13:52.401 fused_ordering(460) 00:13:52.401 fused_ordering(461) 00:13:52.401 fused_ordering(462) 00:13:52.401 fused_ordering(463) 00:13:52.401 fused_ordering(464) 00:13:52.401 fused_ordering(465) 00:13:52.401 fused_ordering(466) 00:13:52.401 fused_ordering(467) 00:13:52.401 fused_ordering(468) 00:13:52.401 fused_ordering(469) 00:13:52.401 fused_ordering(470) 00:13:52.401 fused_ordering(471) 00:13:52.401 fused_ordering(472) 00:13:52.401 fused_ordering(473) 00:13:52.401 fused_ordering(474) 00:13:52.401 fused_ordering(475) 00:13:52.401 fused_ordering(476) 00:13:52.401 
fused_ordering(477) 00:13:52.401 fused_ordering(478) 00:13:52.401 fused_ordering(479) 00:13:52.401 fused_ordering(480) 00:13:52.401 fused_ordering(481) 00:13:52.401 fused_ordering(482) 00:13:52.401 fused_ordering(483) 00:13:52.401 fused_ordering(484) 00:13:52.401 fused_ordering(485) 00:13:52.401 fused_ordering(486) 00:13:52.401 fused_ordering(487) 00:13:52.401 fused_ordering(488) 00:13:52.401 fused_ordering(489) 00:13:52.401 fused_ordering(490) 00:13:52.401 fused_ordering(491) 00:13:52.401 fused_ordering(492) 00:13:52.401 fused_ordering(493) 00:13:52.401 fused_ordering(494) 00:13:52.401 fused_ordering(495) 00:13:52.401 fused_ordering(496) 00:13:52.401 fused_ordering(497) 00:13:52.401 fused_ordering(498) 00:13:52.401 fused_ordering(499) 00:13:52.401 fused_ordering(500) 00:13:52.401 fused_ordering(501) 00:13:52.401 fused_ordering(502) 00:13:52.401 fused_ordering(503) 00:13:52.401 fused_ordering(504) 00:13:52.401 fused_ordering(505) 00:13:52.401 fused_ordering(506) 00:13:52.401 fused_ordering(507) 00:13:52.401 fused_ordering(508) 00:13:52.401 fused_ordering(509) 00:13:52.401 fused_ordering(510) 00:13:52.401 fused_ordering(511) 00:13:52.401 fused_ordering(512) 00:13:52.401 fused_ordering(513) 00:13:52.401 fused_ordering(514) 00:13:52.401 fused_ordering(515) 00:13:52.401 fused_ordering(516) 00:13:52.401 fused_ordering(517) 00:13:52.401 fused_ordering(518) 00:13:52.401 fused_ordering(519) 00:13:52.401 fused_ordering(520) 00:13:52.401 fused_ordering(521) 00:13:52.401 fused_ordering(522) 00:13:52.401 fused_ordering(523) 00:13:52.401 fused_ordering(524) 00:13:52.401 fused_ordering(525) 00:13:52.401 fused_ordering(526) 00:13:52.401 fused_ordering(527) 00:13:52.401 fused_ordering(528) 00:13:52.401 fused_ordering(529) 00:13:52.401 fused_ordering(530) 00:13:52.401 fused_ordering(531) 00:13:52.401 fused_ordering(532) 00:13:52.401 fused_ordering(533) 00:13:52.401 fused_ordering(534) 00:13:52.401 fused_ordering(535) 00:13:52.401 fused_ordering(536) 00:13:52.401 fused_ordering(537) 00:13:52.401 fused_ordering(538) 00:13:52.401 fused_ordering(539) 00:13:52.401 fused_ordering(540) 00:13:52.401 fused_ordering(541) 00:13:52.401 fused_ordering(542) 00:13:52.401 fused_ordering(543) 00:13:52.401 fused_ordering(544) 00:13:52.401 fused_ordering(545) 00:13:52.401 fused_ordering(546) 00:13:52.401 fused_ordering(547) 00:13:52.401 fused_ordering(548) 00:13:52.401 fused_ordering(549) 00:13:52.401 fused_ordering(550) 00:13:52.401 fused_ordering(551) 00:13:52.401 fused_ordering(552) 00:13:52.401 fused_ordering(553) 00:13:52.401 fused_ordering(554) 00:13:52.401 fused_ordering(555) 00:13:52.401 fused_ordering(556) 00:13:52.401 fused_ordering(557) 00:13:52.401 fused_ordering(558) 00:13:52.401 fused_ordering(559) 00:13:52.402 fused_ordering(560) 00:13:52.402 fused_ordering(561) 00:13:52.402 fused_ordering(562) 00:13:52.402 fused_ordering(563) 00:13:52.402 fused_ordering(564) 00:13:52.402 fused_ordering(565) 00:13:52.402 fused_ordering(566) 00:13:52.402 fused_ordering(567) 00:13:52.402 fused_ordering(568) 00:13:52.402 fused_ordering(569) 00:13:52.402 fused_ordering(570) 00:13:52.402 fused_ordering(571) 00:13:52.402 fused_ordering(572) 00:13:52.402 fused_ordering(573) 00:13:52.402 fused_ordering(574) 00:13:52.402 fused_ordering(575) 00:13:52.402 fused_ordering(576) 00:13:52.402 fused_ordering(577) 00:13:52.402 fused_ordering(578) 00:13:52.402 fused_ordering(579) 00:13:52.402 fused_ordering(580) 00:13:52.402 fused_ordering(581) 00:13:52.402 fused_ordering(582) 00:13:52.402 fused_ordering(583) 00:13:52.402 fused_ordering(584) 
00:13:52.402 fused_ordering(585) 00:13:52.402 fused_ordering(586) 00:13:52.402 fused_ordering(587) 00:13:52.402 fused_ordering(588) 00:13:52.402 fused_ordering(589) 00:13:52.402 fused_ordering(590) 00:13:52.402 fused_ordering(591) 00:13:52.402 fused_ordering(592) 00:13:52.402 fused_ordering(593) 00:13:52.402 fused_ordering(594) 00:13:52.402 fused_ordering(595) 00:13:52.402 fused_ordering(596) 00:13:52.402 fused_ordering(597) 00:13:52.402 fused_ordering(598) 00:13:52.402 fused_ordering(599) 00:13:52.402 fused_ordering(600) 00:13:52.402 fused_ordering(601) 00:13:52.402 fused_ordering(602) 00:13:52.402 fused_ordering(603) 00:13:52.402 fused_ordering(604) 00:13:52.402 fused_ordering(605) 00:13:52.402 fused_ordering(606) 00:13:52.402 fused_ordering(607) 00:13:52.402 fused_ordering(608) 00:13:52.402 fused_ordering(609) 00:13:52.402 fused_ordering(610) 00:13:52.402 fused_ordering(611) 00:13:52.402 fused_ordering(612) 00:13:52.402 fused_ordering(613) 00:13:52.402 fused_ordering(614) 00:13:52.402 fused_ordering(615) 00:13:53.335 fused_ordering(616) 00:13:53.335 fused_ordering(617) 00:13:53.335 fused_ordering(618) 00:13:53.335 fused_ordering(619) 00:13:53.335 fused_ordering(620) 00:13:53.335 fused_ordering(621) 00:13:53.335 fused_ordering(622) 00:13:53.335 fused_ordering(623) 00:13:53.335 fused_ordering(624) 00:13:53.335 fused_ordering(625) 00:13:53.335 fused_ordering(626) 00:13:53.335 fused_ordering(627) 00:13:53.335 fused_ordering(628) 00:13:53.335 fused_ordering(629) 00:13:53.335 fused_ordering(630) 00:13:53.335 fused_ordering(631) 00:13:53.335 fused_ordering(632) 00:13:53.335 fused_ordering(633) 00:13:53.335 fused_ordering(634) 00:13:53.335 fused_ordering(635) 00:13:53.335 fused_ordering(636) 00:13:53.335 fused_ordering(637) 00:13:53.335 fused_ordering(638) 00:13:53.335 fused_ordering(639) 00:13:53.335 fused_ordering(640) 00:13:53.335 fused_ordering(641) 00:13:53.335 fused_ordering(642) 00:13:53.335 fused_ordering(643) 00:13:53.335 fused_ordering(644) 00:13:53.335 fused_ordering(645) 00:13:53.335 fused_ordering(646) 00:13:53.335 fused_ordering(647) 00:13:53.335 fused_ordering(648) 00:13:53.335 fused_ordering(649) 00:13:53.335 fused_ordering(650) 00:13:53.335 fused_ordering(651) 00:13:53.335 fused_ordering(652) 00:13:53.335 fused_ordering(653) 00:13:53.335 fused_ordering(654) 00:13:53.335 fused_ordering(655) 00:13:53.335 fused_ordering(656) 00:13:53.335 fused_ordering(657) 00:13:53.335 fused_ordering(658) 00:13:53.335 fused_ordering(659) 00:13:53.335 fused_ordering(660) 00:13:53.335 fused_ordering(661) 00:13:53.335 fused_ordering(662) 00:13:53.335 fused_ordering(663) 00:13:53.335 fused_ordering(664) 00:13:53.335 fused_ordering(665) 00:13:53.335 fused_ordering(666) 00:13:53.335 fused_ordering(667) 00:13:53.335 fused_ordering(668) 00:13:53.335 fused_ordering(669) 00:13:53.335 fused_ordering(670) 00:13:53.335 fused_ordering(671) 00:13:53.335 fused_ordering(672) 00:13:53.335 fused_ordering(673) 00:13:53.335 fused_ordering(674) 00:13:53.335 fused_ordering(675) 00:13:53.335 fused_ordering(676) 00:13:53.335 fused_ordering(677) 00:13:53.335 fused_ordering(678) 00:13:53.335 fused_ordering(679) 00:13:53.335 fused_ordering(680) 00:13:53.335 fused_ordering(681) 00:13:53.335 fused_ordering(682) 00:13:53.335 fused_ordering(683) 00:13:53.335 fused_ordering(684) 00:13:53.335 fused_ordering(685) 00:13:53.335 fused_ordering(686) 00:13:53.335 fused_ordering(687) 00:13:53.335 fused_ordering(688) 00:13:53.335 fused_ordering(689) 00:13:53.335 fused_ordering(690) 00:13:53.335 fused_ordering(691) 00:13:53.335 
fused_ordering(692) 00:13:53.335 fused_ordering(693) 00:13:53.335 fused_ordering(694) 00:13:53.335 fused_ordering(695) 00:13:53.335 fused_ordering(696) 00:13:53.335 fused_ordering(697) 00:13:53.335 fused_ordering(698) 00:13:53.335 fused_ordering(699) 00:13:53.335 fused_ordering(700) 00:13:53.335 fused_ordering(701) 00:13:53.335 fused_ordering(702) 00:13:53.335 fused_ordering(703) 00:13:53.335 fused_ordering(704) 00:13:53.335 fused_ordering(705) 00:13:53.335 fused_ordering(706) 00:13:53.335 fused_ordering(707) 00:13:53.335 fused_ordering(708) 00:13:53.335 fused_ordering(709) 00:13:53.335 fused_ordering(710) 00:13:53.335 fused_ordering(711) 00:13:53.335 fused_ordering(712) 00:13:53.335 fused_ordering(713) 00:13:53.335 fused_ordering(714) 00:13:53.335 fused_ordering(715) 00:13:53.335 fused_ordering(716) 00:13:53.335 fused_ordering(717) 00:13:53.335 fused_ordering(718) 00:13:53.335 fused_ordering(719) 00:13:53.335 fused_ordering(720) 00:13:53.335 fused_ordering(721) 00:13:53.335 fused_ordering(722) 00:13:53.335 fused_ordering(723) 00:13:53.335 fused_ordering(724) 00:13:53.335 fused_ordering(725) 00:13:53.335 fused_ordering(726) 00:13:53.335 fused_ordering(727) 00:13:53.335 fused_ordering(728) 00:13:53.335 fused_ordering(729) 00:13:53.335 fused_ordering(730) 00:13:53.335 fused_ordering(731) 00:13:53.335 fused_ordering(732) 00:13:53.335 fused_ordering(733) 00:13:53.335 fused_ordering(734) 00:13:53.335 fused_ordering(735) 00:13:53.335 fused_ordering(736) 00:13:53.335 fused_ordering(737) 00:13:53.335 fused_ordering(738) 00:13:53.335 fused_ordering(739) 00:13:53.335 fused_ordering(740) 00:13:53.335 fused_ordering(741) 00:13:53.335 fused_ordering(742) 00:13:53.335 fused_ordering(743) 00:13:53.335 fused_ordering(744) 00:13:53.335 fused_ordering(745) 00:13:53.335 fused_ordering(746) 00:13:53.335 fused_ordering(747) 00:13:53.335 fused_ordering(748) 00:13:53.335 fused_ordering(749) 00:13:53.335 fused_ordering(750) 00:13:53.335 fused_ordering(751) 00:13:53.335 fused_ordering(752) 00:13:53.335 fused_ordering(753) 00:13:53.336 fused_ordering(754) 00:13:53.336 fused_ordering(755) 00:13:53.336 fused_ordering(756) 00:13:53.336 fused_ordering(757) 00:13:53.336 fused_ordering(758) 00:13:53.336 fused_ordering(759) 00:13:53.336 fused_ordering(760) 00:13:53.336 fused_ordering(761) 00:13:53.336 fused_ordering(762) 00:13:53.336 fused_ordering(763) 00:13:53.336 fused_ordering(764) 00:13:53.336 fused_ordering(765) 00:13:53.336 fused_ordering(766) 00:13:53.336 fused_ordering(767) 00:13:53.336 fused_ordering(768) 00:13:53.336 fused_ordering(769) 00:13:53.336 fused_ordering(770) 00:13:53.336 fused_ordering(771) 00:13:53.336 fused_ordering(772) 00:13:53.336 fused_ordering(773) 00:13:53.336 fused_ordering(774) 00:13:53.336 fused_ordering(775) 00:13:53.336 fused_ordering(776) 00:13:53.336 fused_ordering(777) 00:13:53.336 fused_ordering(778) 00:13:53.336 fused_ordering(779) 00:13:53.336 fused_ordering(780) 00:13:53.336 fused_ordering(781) 00:13:53.336 fused_ordering(782) 00:13:53.336 fused_ordering(783) 00:13:53.336 fused_ordering(784) 00:13:53.336 fused_ordering(785) 00:13:53.336 fused_ordering(786) 00:13:53.336 fused_ordering(787) 00:13:53.336 fused_ordering(788) 00:13:53.336 fused_ordering(789) 00:13:53.336 fused_ordering(790) 00:13:53.336 fused_ordering(791) 00:13:53.336 fused_ordering(792) 00:13:53.336 fused_ordering(793) 00:13:53.336 fused_ordering(794) 00:13:53.336 fused_ordering(795) 00:13:53.336 fused_ordering(796) 00:13:53.336 fused_ordering(797) 00:13:53.336 fused_ordering(798) 00:13:53.336 fused_ordering(799) 
00:13:53.336 fused_ordering(800) 00:13:53.336 fused_ordering(801) 00:13:53.336 fused_ordering(802) 00:13:53.336 fused_ordering(803) 00:13:53.336 fused_ordering(804) 00:13:53.336 fused_ordering(805) 00:13:53.336 fused_ordering(806) 00:13:53.336 fused_ordering(807) 00:13:53.336 fused_ordering(808) 00:13:53.336 fused_ordering(809) 00:13:53.336 fused_ordering(810) 00:13:53.336 fused_ordering(811) 00:13:53.336 fused_ordering(812) 00:13:53.336 fused_ordering(813) 00:13:53.336 fused_ordering(814) 00:13:53.336 fused_ordering(815) 00:13:53.336 fused_ordering(816) 00:13:53.336 fused_ordering(817) 00:13:53.336 fused_ordering(818) 00:13:53.336 fused_ordering(819) 00:13:53.336 fused_ordering(820) 00:13:54.268 fused_ordering(821) 00:13:54.268 fused_ordering(822) 00:13:54.268 fused_ordering(823) 00:13:54.268 fused_ordering(824) 00:13:54.268 fused_ordering(825) 00:13:54.268 fused_ordering(826) 00:13:54.268 fused_ordering(827) 00:13:54.268 fused_ordering(828) 00:13:54.268 fused_ordering(829) 00:13:54.268 fused_ordering(830) 00:13:54.268 fused_ordering(831) 00:13:54.268 fused_ordering(832) 00:13:54.268 fused_ordering(833) 00:13:54.268 fused_ordering(834) 00:13:54.268 fused_ordering(835) 00:13:54.268 fused_ordering(836) 00:13:54.268 fused_ordering(837) 00:13:54.268 fused_ordering(838) 00:13:54.268 fused_ordering(839) 00:13:54.268 fused_ordering(840) 00:13:54.268 fused_ordering(841) 00:13:54.268 fused_ordering(842) 00:13:54.268 fused_ordering(843) 00:13:54.268 fused_ordering(844) 00:13:54.268 fused_ordering(845) 00:13:54.268 fused_ordering(846) 00:13:54.268 fused_ordering(847) 00:13:54.268 fused_ordering(848) 00:13:54.268 fused_ordering(849) 00:13:54.268 fused_ordering(850) 00:13:54.268 fused_ordering(851) 00:13:54.268 fused_ordering(852) 00:13:54.268 fused_ordering(853) 00:13:54.268 fused_ordering(854) 00:13:54.268 fused_ordering(855) 00:13:54.268 fused_ordering(856) 00:13:54.268 fused_ordering(857) 00:13:54.268 fused_ordering(858) 00:13:54.268 fused_ordering(859) 00:13:54.268 fused_ordering(860) 00:13:54.268 fused_ordering(861) 00:13:54.268 fused_ordering(862) 00:13:54.268 fused_ordering(863) 00:13:54.268 fused_ordering(864) 00:13:54.268 fused_ordering(865) 00:13:54.268 fused_ordering(866) 00:13:54.268 fused_ordering(867) 00:13:54.268 fused_ordering(868) 00:13:54.268 fused_ordering(869) 00:13:54.268 fused_ordering(870) 00:13:54.268 fused_ordering(871) 00:13:54.268 fused_ordering(872) 00:13:54.268 fused_ordering(873) 00:13:54.268 fused_ordering(874) 00:13:54.268 fused_ordering(875) 00:13:54.268 fused_ordering(876) 00:13:54.268 fused_ordering(877) 00:13:54.268 fused_ordering(878) 00:13:54.268 fused_ordering(879) 00:13:54.268 fused_ordering(880) 00:13:54.268 fused_ordering(881) 00:13:54.268 fused_ordering(882) 00:13:54.268 fused_ordering(883) 00:13:54.268 fused_ordering(884) 00:13:54.268 fused_ordering(885) 00:13:54.268 fused_ordering(886) 00:13:54.268 fused_ordering(887) 00:13:54.268 fused_ordering(888) 00:13:54.268 fused_ordering(889) 00:13:54.268 fused_ordering(890) 00:13:54.268 fused_ordering(891) 00:13:54.268 fused_ordering(892) 00:13:54.268 fused_ordering(893) 00:13:54.268 fused_ordering(894) 00:13:54.268 fused_ordering(895) 00:13:54.268 fused_ordering(896) 00:13:54.268 fused_ordering(897) 00:13:54.268 fused_ordering(898) 00:13:54.268 fused_ordering(899) 00:13:54.268 fused_ordering(900) 00:13:54.268 fused_ordering(901) 00:13:54.268 fused_ordering(902) 00:13:54.268 fused_ordering(903) 00:13:54.268 fused_ordering(904) 00:13:54.268 fused_ordering(905) 00:13:54.268 fused_ordering(906) 00:13:54.268 
fused_ordering(907) 00:13:54.269 fused_ordering(908) 00:13:54.269 fused_ordering(909) 00:13:54.269 fused_ordering(910) 00:13:54.269 fused_ordering(911) 00:13:54.269 fused_ordering(912) 00:13:54.269 fused_ordering(913) 00:13:54.269 fused_ordering(914) 00:13:54.269 fused_ordering(915) 00:13:54.269 fused_ordering(916) 00:13:54.269 fused_ordering(917) 00:13:54.269 fused_ordering(918) 00:13:54.269 fused_ordering(919) 00:13:54.269 fused_ordering(920) 00:13:54.269 fused_ordering(921) 00:13:54.269 fused_ordering(922) 00:13:54.269 fused_ordering(923) 00:13:54.269 fused_ordering(924) 00:13:54.269 fused_ordering(925) 00:13:54.269 fused_ordering(926) 00:13:54.269 fused_ordering(927) 00:13:54.269 fused_ordering(928) 00:13:54.269 fused_ordering(929) 00:13:54.269 fused_ordering(930) 00:13:54.269 fused_ordering(931) 00:13:54.269 fused_ordering(932) 00:13:54.269 fused_ordering(933) 00:13:54.269 fused_ordering(934) 00:13:54.269 fused_ordering(935) 00:13:54.269 fused_ordering(936) 00:13:54.269 fused_ordering(937) 00:13:54.269 fused_ordering(938) 00:13:54.269 fused_ordering(939) 00:13:54.269 fused_ordering(940) 00:13:54.269 fused_ordering(941) 00:13:54.269 fused_ordering(942) 00:13:54.269 fused_ordering(943) 00:13:54.269 fused_ordering(944) 00:13:54.269 fused_ordering(945) 00:13:54.269 fused_ordering(946) 00:13:54.269 fused_ordering(947) 00:13:54.269 fused_ordering(948) 00:13:54.269 fused_ordering(949) 00:13:54.269 fused_ordering(950) 00:13:54.269 fused_ordering(951) 00:13:54.269 fused_ordering(952) 00:13:54.269 fused_ordering(953) 00:13:54.269 fused_ordering(954) 00:13:54.269 fused_ordering(955) 00:13:54.269 fused_ordering(956) 00:13:54.269 fused_ordering(957) 00:13:54.269 fused_ordering(958) 00:13:54.269 fused_ordering(959) 00:13:54.269 fused_ordering(960) 00:13:54.269 fused_ordering(961) 00:13:54.269 fused_ordering(962) 00:13:54.269 fused_ordering(963) 00:13:54.269 fused_ordering(964) 00:13:54.269 fused_ordering(965) 00:13:54.269 fused_ordering(966) 00:13:54.269 fused_ordering(967) 00:13:54.269 fused_ordering(968) 00:13:54.269 fused_ordering(969) 00:13:54.269 fused_ordering(970) 00:13:54.269 fused_ordering(971) 00:13:54.269 fused_ordering(972) 00:13:54.269 fused_ordering(973) 00:13:54.269 fused_ordering(974) 00:13:54.269 fused_ordering(975) 00:13:54.269 fused_ordering(976) 00:13:54.269 fused_ordering(977) 00:13:54.269 fused_ordering(978) 00:13:54.269 fused_ordering(979) 00:13:54.269 fused_ordering(980) 00:13:54.269 fused_ordering(981) 00:13:54.269 fused_ordering(982) 00:13:54.269 fused_ordering(983) 00:13:54.269 fused_ordering(984) 00:13:54.269 fused_ordering(985) 00:13:54.269 fused_ordering(986) 00:13:54.269 fused_ordering(987) 00:13:54.269 fused_ordering(988) 00:13:54.269 fused_ordering(989) 00:13:54.269 fused_ordering(990) 00:13:54.269 fused_ordering(991) 00:13:54.269 fused_ordering(992) 00:13:54.269 fused_ordering(993) 00:13:54.269 fused_ordering(994) 00:13:54.269 fused_ordering(995) 00:13:54.269 fused_ordering(996) 00:13:54.269 fused_ordering(997) 00:13:54.269 fused_ordering(998) 00:13:54.269 fused_ordering(999) 00:13:54.269 fused_ordering(1000) 00:13:54.269 fused_ordering(1001) 00:13:54.269 fused_ordering(1002) 00:13:54.269 fused_ordering(1003) 00:13:54.269 fused_ordering(1004) 00:13:54.269 fused_ordering(1005) 00:13:54.269 fused_ordering(1006) 00:13:54.269 fused_ordering(1007) 00:13:54.269 fused_ordering(1008) 00:13:54.269 fused_ordering(1009) 00:13:54.269 fused_ordering(1010) 00:13:54.269 fused_ordering(1011) 00:13:54.269 fused_ordering(1012) 00:13:54.269 fused_ordering(1013) 00:13:54.269 
fused_ordering(1014) 00:13:54.269 fused_ordering(1015) 00:13:54.269 fused_ordering(1016) 00:13:54.269 fused_ordering(1017) 00:13:54.269 fused_ordering(1018) 00:13:54.269 fused_ordering(1019) 00:13:54.269 fused_ordering(1020) 00:13:54.269 fused_ordering(1021) 00:13:54.269 fused_ordering(1022) 00:13:54.269 fused_ordering(1023) 00:13:54.269 17:49:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:54.269 17:49:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:54.269 17:49:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:54.269 17:49:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:54.269 17:49:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:54.269 17:49:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:54.269 17:49:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:54.269 17:49:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:54.269 rmmod nvme_tcp 00:13:54.269 rmmod nvme_fabrics 00:13:54.269 rmmod nvme_keyring 00:13:54.269 17:49:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:54.269 17:49:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:54.269 17:49:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:54.269 17:49:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 897801 ']' 00:13:54.269 17:49:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 897801 00:13:54.269 17:49:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 897801 ']' 00:13:54.269 17:49:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 897801 00:13:54.269 17:49:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:13:54.269 17:49:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:54.269 17:49:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 897801 00:13:54.269 17:49:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:54.269 17:49:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:54.269 17:49:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 897801' 00:13:54.269 killing process with pid 897801 00:13:54.269 17:49:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 897801 00:13:54.269 17:49:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 897801 00:13:54.526 17:49:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:54.526 17:49:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:54.526 17:49:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:54.526 17:49:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:54.527 17:49:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:54.527 17:49:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.527 17:49:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.527 
17:49:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.057 17:49:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:57.058 00:13:57.058 real 0m10.103s 00:13:57.058 user 0m8.065s 00:13:57.058 sys 0m5.299s 00:13:57.058 17:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:57.058 17:49:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:57.058 ************************************ 00:13:57.058 END TEST nvmf_fused_ordering 00:13:57.058 ************************************ 00:13:57.058 17:49:31 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:57.058 17:49:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:57.058 17:49:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:57.058 17:49:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:57.058 ************************************ 00:13:57.058 START TEST nvmf_delete_subsystem 00:13:57.058 ************************************ 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:57.058 * Looking for test storage... 00:13:57.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:13:57.058 17:49:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:58.955 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:58.955 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:58.955 17:49:33 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.955 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:58.956 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:58.956 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:58.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:13:58.956 00:13:58.956 --- 10.0.0.2 ping statistics --- 00:13:58.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.956 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:58.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:58.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:13:58.956 00:13:58.956 --- 10.0.0.1 ping statistics --- 00:13:58.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.956 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=900412 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 900412 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 900412 ']' 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:58.956 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:58.956 [2024-07-20 17:49:33.538240] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:58.956 [2024-07-20 17:49:33.538335] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.956 EAL: No free 2048 kB hugepages reported on node 1 00:13:58.956 [2024-07-20 17:49:33.603813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:58.956 [2024-07-20 17:49:33.692172] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:58.956 [2024-07-20 17:49:33.692231] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.956 [2024-07-20 17:49:33.692259] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.956 [2024-07-20 17:49:33.692271] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.956 [2024-07-20 17:49:33.692280] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:58.956 [2024-07-20 17:49:33.692365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.956 [2024-07-20 17:49:33.692370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.212 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:59.213 [2024-07-20 17:49:33.839626] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:59.213 [2024-07-20 17:49:33.855943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:59.213 NULL1 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:59.213 Delay0 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=900432 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:59.213 17:49:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:59.213 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.213 [2024-07-20 17:49:33.930656] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
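The run above reduces to a short sequence of RPC calls before traffic starts: create the TCP transport and subsystem, expose a null bdev behind a deliberately slow delay bdev, launch spdk_nvme_perf against it, and then pull the subsystem out from under that I/O. A minimal stand-alone sketch of the same sequence, assuming the target is already running and scripts/rpc.py reaches it on the default /var/tmp/spdk.sock; the NQN, address, and flags are copied from this run, while the relative paths and the perf_pid variable are illustrative, not the script's own helpers:

  # transport + subsystem + listener on the target-side address
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # a null bdev wrapped in a delay bdev, so submitted commands are guaranteed to sit queued
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # start I/O, give it a moment to queue up, then delete the subsystem while it is in flight
  build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The queued commands then complete with errors (sct=0, sc=8) on the initiator side, which is exactly the output that follows.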
00:14:01.105 17:49:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:01.105 17:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.105 17:49:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 starting I/O failed: -6 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 starting I/O failed: -6 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 starting I/O failed: -6 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 starting I/O failed: -6 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 starting I/O failed: -6 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 starting I/O failed: -6 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 starting I/O failed: -6 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 starting I/O failed: -6 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 starting I/O failed: -6 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 [2024-07-20 17:49:36.102344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4ac4000c00 is same with the state(5) to be set 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with 
error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Write completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.362 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 
00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read 
completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Read completed with error (sct=0, sc=8) 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 Write completed with error (sct=0, sc=8) 00:14:01.363 starting I/O failed: -6 00:14:01.363 starting I/O failed: -6 00:14:01.363 starting I/O failed: -6 00:14:01.363 starting I/O failed: -6 00:14:01.363 starting I/O failed: -6 00:14:01.363 starting I/O failed: -6 00:14:01.363 starting I/O failed: -6 00:14:01.363 starting I/O failed: -6 00:14:01.363 starting I/O failed: -6 00:14:01.363 starting I/O failed: -6 00:14:02.293 [2024-07-20 17:49:37.072377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d8b0 is same with the state(5) to be set 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error 
(sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 [2024-07-20 17:49:37.097168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119a360 is same with the state(5) to be set 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 [2024-07-20 17:49:37.097435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119aaa0 is same with the state(5) to be set 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 
00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 [2024-07-20 17:49:37.102734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4ac400c600 is same with the state(5) to be set 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Write completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.549 Read completed with error (sct=0, sc=8) 00:14:02.550 Read completed with error (sct=0, sc=8) 00:14:02.550 Read completed with error (sct=0, sc=8) 00:14:02.550 Read completed with error (sct=0, sc=8) 00:14:02.550 Read completed with error (sct=0, sc=8) 00:14:02.550 Write completed with error (sct=0, sc=8) 00:14:02.550 Read completed with error (sct=0, sc=8) 00:14:02.550 Read completed with error (sct=0, sc=8) 00:14:02.550 Write completed with error (sct=0, sc=8) 00:14:02.550 [2024-07-20 17:49:37.106245] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4ac400bfe0 is same with the state(5) to be set 00:14:02.550 Initializing NVMe Controllers 00:14:02.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:02.550 Controller IO queue size 128, less than required. 00:14:02.550 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:02.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:02.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:02.550 Initialization complete. Launching workers. 
00:14:02.550 ======================================================== 00:14:02.550 Latency(us) 00:14:02.550 Device Information : IOPS MiB/s Average min max 00:14:02.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 182.06 0.09 921816.25 617.82 1992850.57 00:14:02.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.28 0.08 932097.31 565.30 1011098.37 00:14:02.550 ======================================================== 00:14:02.550 Total : 336.35 0.16 926532.19 565.30 1992850.57 00:14:02.550 00:14:02.550 [2024-07-20 17:49:37.106674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119d8b0 (9): Bad file descriptor 00:14:02.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:02.550 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.550 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:02.550 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 900432 00:14:02.550 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:03.112 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 900432 00:14:03.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (900432) - No such process 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 900432 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 900432 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 900432 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
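Stripped of the rpc_cmd/xtrace wrappers, the rebuild being traced here amounts to three RPCs against the already-running target. A minimal sketch, using the rpc.py path, NQN and bdev name (Delay0) that appear in this trace; it is a condensation of the commands shown, not a separate script:

    # Recreate the subsystem the next perf run will target (args copied from the trace).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0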
00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:03.113 [2024-07-20 17:49:37.626198] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=900844 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 900844 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:03.113 17:49:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:03.113 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.113 [2024-07-20 17:49:37.682985] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
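The perf job is launched in the background and the script then simply polls for its exit with kill -0, which sends no signal and only tests whether the PID still exists (the @54-@60 markers in the trace above). A minimal bash sketch of that shape, assuming the binary path and arguments shown in the trace:

    # Start the load in the background and remember its PID.
    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    "$perf" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
            -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    # Poll until the process is gone; kill -0 checks existence without signalling it.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && exit 1   # give up after roughly 10s of 0.5s sleeps
        sleep 0.5
    done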
00:14:03.370 17:49:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:03.370 17:49:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 900844 00:14:03.370 17:49:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:03.934 17:49:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:03.934 17:49:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 900844 00:14:03.934 17:49:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:04.497 17:49:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:04.497 17:49:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 900844 00:14:04.497 17:49:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:05.060 17:49:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:05.060 17:49:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 900844 00:14:05.060 17:49:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:05.640 17:49:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:05.640 17:49:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 900844 00:14:05.640 17:49:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:05.896 17:49:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:05.896 17:49:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 900844 00:14:05.896 17:49:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:06.154 Initializing NVMe Controllers 00:14:06.154 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:06.154 Controller IO queue size 128, less than required. 00:14:06.154 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:06.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:06.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:06.154 Initialization complete. Launching workers. 
00:14:06.154 ======================================================== 00:14:06.154 Latency(us) 00:14:06.154 Device Information : IOPS MiB/s Average min max 00:14:06.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004090.06 1000337.54 1014206.38 00:14:06.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005193.97 1000391.52 1041562.08 00:14:06.154 ======================================================== 00:14:06.154 Total : 256.00 0.12 1004642.02 1000337.54 1041562.08 00:14:06.154 00:14:06.411 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:06.411 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 900844 00:14:06.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (900844) - No such process 00:14:06.411 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 900844 00:14:06.411 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:06.411 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:06.411 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:06.411 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:06.411 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:06.411 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:06.411 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.411 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:06.411 rmmod nvme_tcp 00:14:06.411 rmmod nvme_fabrics 00:14:06.411 rmmod nvme_keyring 00:14:06.669 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:06.669 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:06.669 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:06.669 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 900412 ']' 00:14:06.669 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 900412 00:14:06.669 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 900412 ']' 00:14:06.669 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 900412 00:14:06.669 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:14:06.669 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:06.669 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 900412 00:14:06.669 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:06.669 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:06.669 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 900412' 00:14:06.669 killing process with pid 900412 00:14:06.669 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 900412 00:14:06.669 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 900412 
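The Total row in the summary above is the IOPS-weighted mean of the per-core averages; with both cores reporting 128 IOPS it collapses to a plain mean. A quick re-derivation of that number from the two rows in the table (awk is used here only for the floating-point arithmetic, it is not part of the test):

    awk 'BEGIN { printf "%.3f\n", (128*1004090.06 + 128*1005193.97)/256 }'
    # -> 1004642.015, i.e. the 1004642.02 reported on the Total line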
00:14:06.669 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:06.669 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:06.927 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:06.927 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.927 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:06.927 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.927 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.927 17:49:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.826 17:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:08.826 00:14:08.826 real 0m12.137s 00:14:08.826 user 0m27.557s 00:14:08.826 sys 0m3.062s 00:14:08.826 17:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:08.826 17:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:08.826 ************************************ 00:14:08.826 END TEST nvmf_delete_subsystem 00:14:08.826 ************************************ 00:14:08.826 17:49:43 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:08.826 17:49:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:08.826 17:49:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:08.826 17:49:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:08.826 ************************************ 00:14:08.826 START TEST nvmf_ns_masking 00:14:08.826 ************************************ 00:14:08.826 17:49:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:08.826 * Looking for test storage... 
00:14:08.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.826 17:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.826 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:08.826 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.826 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.826 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.826 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.826 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.826 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.826 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.826 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.826 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.826 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.083 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.083 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.083 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.083 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.083 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.083 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.083 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.083 17:49:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.083 17:49:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.083 17:49:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.083 17:49:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.083 17:49:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.083 17:49:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.083 17:49:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=fe42b003-e06f-46ed-8a47-6d91d752145a 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:09.084 17:49:43 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:09.084 17:49:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:10.986 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:10.986 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:10.986 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:10.986 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.986 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:10.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:14:10.987 00:14:10.987 --- 10.0.0.2 ping statistics --- 00:14:10.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.987 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:14:10.987 00:14:10.987 --- 10.0.0.1 ping statistics --- 00:14:10.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.987 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=903186 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 903186 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 903186 ']' 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:10.987 17:49:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.987 [2024-07-20 17:49:45.741488] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
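The interface shuffling above splits the two ports of the E810 adapter found earlier between target and initiator: cvl_0_0 is moved into a fresh network namespace and addressed as 10.0.0.2, cvl_0_1 stays in the default namespace as 10.0.0.1, both directions are ping-checked, and the nvmf target is then launched inside that namespace. Condensed into a minimal sketch, with device names and addresses taken from the log:

    # Target side lives in its own netns; the initiator side stays in the default one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Verify reachability in both directions before starting the target in the namespace.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1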
00:14:10.987 [2024-07-20 17:49:45.741558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.987 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.275 [2024-07-20 17:49:45.811088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.275 [2024-07-20 17:49:45.900604] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.275 [2024-07-20 17:49:45.900656] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.275 [2024-07-20 17:49:45.900669] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.275 [2024-07-20 17:49:45.900680] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.275 [2024-07-20 17:49:45.900689] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.275 [2024-07-20 17:49:45.900777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.275 [2024-07-20 17:49:45.900910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.275 [2024-07-20 17:49:45.900934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.275 [2024-07-20 17:49:45.900937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.275 17:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:11.275 17:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:11.275 17:49:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.275 17:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:11.275 17:49:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.275 17:49:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.275 17:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:11.532 [2024-07-20 17:49:46.325553] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.788 17:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:11.788 17:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:11.788 17:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:12.045 Malloc1 00:14:12.045 17:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:12.302 Malloc2 00:14:12.302 17:49:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:12.559 17:49:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:12.816 17:49:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.074 [2024-07-20 17:49:47.637645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.074 17:49:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:13.074 17:49:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fe42b003-e06f-46ed-8a47-6d91d752145a -a 10.0.0.2 -s 4420 -i 4 00:14:13.074 17:49:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:13.074 17:49:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:13.074 17:49:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:13.074 17:49:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:13.074 17:49:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:15.615 17:49:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:15.616 17:49:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:15.616 17:49:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:15.616 17:49:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:15.616 17:49:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:15.616 17:49:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:15.616 17:49:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:15.616 17:49:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:15.616 17:49:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:15.616 17:49:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:15.616 17:49:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:15.616 17:49:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:15.616 17:49:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:15.616 [ 0]:0x1 00:14:15.616 17:49:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.616 17:49:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:15.616 17:49:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4000cf8eda6a49e9a5ffd994a4468a5d 00:14:15.616 17:49:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4000cf8eda6a49e9a5ffd994a4468a5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.616 17:49:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:15.616 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:15.616 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:15.616 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
00:14:15.616 [ 0]:0x1 00:14:15.616 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.616 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:15.616 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4000cf8eda6a49e9a5ffd994a4468a5d 00:14:15.616 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4000cf8eda6a49e9a5ffd994a4468a5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.616 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:15.616 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:15.616 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:15.616 [ 1]:0x2 00:14:15.616 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:15.616 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:15.616 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7b8a3f5172bc4f3f989800de939f7ade 00:14:15.616 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7b8a3f5172bc4f3f989800de939f7ade != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.616 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:15.616 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:15.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.616 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.873 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:16.130 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:16.130 17:49:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fe42b003-e06f-46ed-8a47-6d91d752145a -a 10.0.0.2 -s 4420 -i 4 00:14:16.387 17:49:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:16.387 17:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:16.387 17:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.387 17:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:16.387 17:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:16.387 17:49:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:18.312 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:18.312 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:18.312 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:18.312 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:18.312 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:14:18.312 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:18.312 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:18.312 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:18.568 [ 0]:0x2 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7b8a3f5172bc4f3f989800de939f7ade 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7b8a3f5172bc4f3f989800de939f7ade != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.568 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:14:18.824 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:18.824 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:18.824 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:18.824 [ 0]:0x1 00:14:18.824 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:18.824 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:18.824 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4000cf8eda6a49e9a5ffd994a4468a5d 00:14:18.824 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4000cf8eda6a49e9a5ffd994a4468a5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.824 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:18.824 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:18.824 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:18.824 [ 1]:0x2 00:14:18.824 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:18.824 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:19.081 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7b8a3f5172bc4f3f989800de939f7ade 00:14:19.081 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7b8a3f5172bc4f3f989800de939f7ade != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.081 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:19.337 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:19.337 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:19.337 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:19.337 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:19.337 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:19.337 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:19.337 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:19.338 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:19.338 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:19.338 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:19.338 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:19.338 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:19.338 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:19.338 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.338 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:19.338 
17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:19.338 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:19.338 17:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:19.338 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:19.338 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:19.338 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:19.338 [ 0]:0x2 00:14:19.338 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:19.338 17:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:19.338 17:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7b8a3f5172bc4f3f989800de939f7ade 00:14:19.338 17:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7b8a3f5172bc4f3f989800de939f7ade != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.338 17:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:19.338 17:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:19.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.338 17:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:19.594 17:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:19.594 17:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fe42b003-e06f-46ed-8a47-6d91d752145a -a 10.0.0.2 -s 4420 -i 4 00:14:19.851 17:49:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:19.851 17:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:19.851 17:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:19.851 17:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:19.851 17:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:19.851 17:49:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:21.745 17:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:21.745 17:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:21.745 17:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.745 17:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:21.745 17:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.745 17:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:21.745 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:21.745 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:21.745 17:49:56 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:21.745 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:21.745 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:21.745 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:21.745 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:21.745 [ 0]:0x1 00:14:21.745 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:21.745 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:22.002 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4000cf8eda6a49e9a5ffd994a4468a5d 00:14:22.002 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4000cf8eda6a49e9a5ffd994a4468a5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.002 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:22.002 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:22.002 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:22.002 [ 1]:0x2 00:14:22.002 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.002 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:22.002 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7b8a3f5172bc4f3f989800de939f7ade 00:14:22.002 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7b8a3f5172bc4f3f989800de939f7ade != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.002 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:22.260 [ 0]:0x2 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.260 17:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:22.260 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7b8a3f5172bc4f3f989800de939f7ade 00:14:22.260 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7b8a3f5172bc4f3f989800de939f7ade != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.260 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:22.260 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:22.260 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:22.260 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.260 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.260 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.260 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.260 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.260 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.260 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.260 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:22.260 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:22.517 [2024-07-20 17:49:57.249257] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:22.517 request: 00:14:22.517 { 00:14:22.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.517 "nsid": 2, 00:14:22.517 "host": "nqn.2016-06.io.spdk:host1", 00:14:22.517 "method": 
"nvmf_ns_remove_host", 00:14:22.517 "req_id": 1 00:14:22.517 } 00:14:22.517 Got JSON-RPC error response 00:14:22.517 response: 00:14:22.517 { 00:14:22.517 "code": -32602, 00:14:22.517 "message": "Invalid parameters" 00:14:22.517 } 00:14:22.517 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:22.517 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:22.517 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:22.517 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:22.517 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:22.517 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:22.517 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:22.517 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:22.517 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.517 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:22.517 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.517 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:22.517 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:22.517 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:22.517 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.517 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:22.774 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:22.775 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.775 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:22.775 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:22.775 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:22.775 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:22.775 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:22.775 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:22.775 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:22.775 [ 0]:0x2 00:14:22.775 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.775 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:22.775 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7b8a3f5172bc4f3f989800de939f7ade 00:14:22.775 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7b8a3f5172bc4f3f989800de939f7ade != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.775 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:22.775 17:49:57 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.775 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:23.032 rmmod nvme_tcp 00:14:23.032 rmmod nvme_fabrics 00:14:23.032 rmmod nvme_keyring 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 903186 ']' 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 903186 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 903186 ']' 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 903186 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:23.032 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 903186 00:14:23.290 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:23.290 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:23.290 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 903186' 00:14:23.290 killing process with pid 903186 00:14:23.290 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 903186 00:14:23.290 17:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 903186 00:14:23.548 17:49:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:23.548 17:49:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:23.548 17:49:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:23.548 17:49:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:23.548 17:49:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:23.548 17:49:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.548 17:49:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.548 17:49:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.449 17:50:00 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:25.449 00:14:25.449 real 0m16.619s 00:14:25.449 user 0m52.056s 00:14:25.449 sys 0m3.698s 00:14:25.449 17:50:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:25.449 17:50:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:25.449 ************************************ 00:14:25.449 END TEST nvmf_ns_masking 00:14:25.449 ************************************ 00:14:25.449 17:50:00 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:25.449 17:50:00 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:25.449 17:50:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:25.449 17:50:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:25.449 17:50:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:25.449 ************************************ 00:14:25.449 START TEST nvmf_nvme_cli 00:14:25.449 ************************************ 00:14:25.449 17:50:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:25.706 * Looking for test storage... 00:14:25.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:25.706 17:50:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:25.706 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:25.706 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.706 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.706 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.706 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.706 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.706 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.706 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.706 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.706 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.706 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:25.707 17:50:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.601 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:27.602 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:27.602 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:27.602 17:50:02 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:27.602 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:27.602 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:27.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:14:27.602 00:14:27.602 --- 10.0.0.2 ping statistics --- 00:14:27.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.602 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:14:27.602 00:14:27.602 --- 10.0.0.1 ping statistics --- 00:14:27.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.602 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:27.602 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:27.860 17:50:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:27.860 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:27.860 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:27.860 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:27.860 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=906733 00:14:27.860 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:27.860 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 906733 00:14:27.860 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 906733 ']' 00:14:27.860 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.860 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:27.860 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
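The trace above brings up the per-test TCP network before starting the target. A condensed, standalone recap of those steps is sketched below (root required); the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace, the 10.0.0.x addresses, and the nvmf_tgt path are taken from this log and are specific to this rig.

# Put the target-side port in its own network namespace; the initiator side stays in the default one.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the default NVMe/TCP port and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Load the host-side NVMe/TCP driver and launch the SPDK target inside the namespace.
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Running the target in a separate namespace lets the initiator and target share one machine while still exercising a real kernel TCP path between the two physical ports.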
00:14:27.860 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:27.860 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:27.860 [2024-07-20 17:50:02.464317] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:27.860 [2024-07-20 17:50:02.464411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.860 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.860 [2024-07-20 17:50:02.534138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:27.860 [2024-07-20 17:50:02.624492] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.860 [2024-07-20 17:50:02.624555] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.860 [2024-07-20 17:50:02.624589] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.860 [2024-07-20 17:50:02.624605] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.860 [2024-07-20 17:50:02.624617] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.860 [2024-07-20 17:50:02.624705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.860 [2024-07-20 17:50:02.624760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.860 [2024-07-20 17:50:02.624890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:27.860 [2024-07-20 17:50:02.624894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.118 [2024-07-20 17:50:02.774694] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.118 Malloc0 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.118 Malloc1 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.118 [2024-07-20 17:50:02.857049] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.118 17:50:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:28.378 00:14:28.378 Discovery Log Number of Records 2, Generation counter 2 00:14:28.378 =====Discovery Log Entry 0====== 00:14:28.378 trtype: tcp 00:14:28.378 adrfam: ipv4 00:14:28.378 subtype: current discovery subsystem 00:14:28.378 treq: not required 00:14:28.378 portid: 0 00:14:28.378 trsvcid: 4420 00:14:28.378 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:28.378 traddr: 10.0.0.2 00:14:28.378 eflags: explicit discovery connections, duplicate discovery information 00:14:28.378 sectype: none 00:14:28.378 =====Discovery Log Entry 1====== 00:14:28.378 trtype: tcp 00:14:28.378 adrfam: ipv4 00:14:28.378 subtype: nvme subsystem 00:14:28.378 treq: not required 00:14:28.378 portid: 0 00:14:28.378 trsvcid: 
4420 00:14:28.378 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:28.378 traddr: 10.0.0.2 00:14:28.378 eflags: none 00:14:28.378 sectype: none 00:14:28.378 17:50:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:28.378 17:50:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:28.378 17:50:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:28.378 17:50:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:28.378 17:50:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:28.378 17:50:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:28.378 17:50:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:28.378 17:50:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:28.378 17:50:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:28.378 17:50:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:28.378 17:50:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:28.945 17:50:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:28.945 17:50:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:14:28.945 17:50:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:28.945 17:50:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:28.945 17:50:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:28.945 17:50:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:14:30.840 17:50:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:30.840 17:50:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:30.840 17:50:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.841 17:50:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:30.841 17:50:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:30.841 17:50:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:14:30.841 17:50:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:30.841 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:30.841 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:30.841 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:31.098 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:31.098 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:31.098 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:31.098 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:31.098 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:31.098 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:31.098 17:50:05 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:31.098 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:31.098 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:31.098 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:31.098 17:50:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:31.098 /dev/nvme0n1 ]] 00:14:31.098 17:50:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:31.098 17:50:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:31.098 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:31.098 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:31.098 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:31.355 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:31.355 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:31.355 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:31.355 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:31.355 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:31.355 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:31.355 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:31.355 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:31.355 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:31.355 17:50:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:31.355 17:50:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:31.355 17:50:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:31.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.355 17:50:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:31.355 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:14:31.355 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:31.355 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:31.613 17:50:06 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:31.613 rmmod nvme_tcp 00:14:31.613 rmmod nvme_fabrics 00:14:31.613 rmmod nvme_keyring 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 906733 ']' 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 906733 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 906733 ']' 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 906733 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 906733 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 906733' 00:14:31.613 killing process with pid 906733 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 906733 00:14:31.613 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 906733 00:14:31.891 17:50:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.891 17:50:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:31.891 17:50:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:31.891 17:50:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.891 17:50:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:31.891 17:50:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.891 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.891 17:50:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.790 17:50:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:33.790 00:14:33.790 real 0m8.324s 00:14:33.790 user 0m15.854s 00:14:33.790 sys 0m2.142s 00:14:33.790 17:50:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:33.790 17:50:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:33.790 ************************************ 00:14:33.790 END TEST nvmf_nvme_cli 00:14:33.790 ************************************ 00:14:33.790 17:50:08 nvmf_tcp -- 
nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:33.790 17:50:08 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:33.790 17:50:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:33.790 17:50:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:33.790 17:50:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:34.048 ************************************ 00:14:34.048 START TEST nvmf_vfio_user 00:14:34.048 ************************************ 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:34.048 * Looking for test storage... 00:14:34.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:34.048 
17:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=907654 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 907654' 00:14:34.048 Process pid: 907654 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 907654 00:14:34.048 17:50:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 907654 ']' 00:14:34.049 17:50:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.049 17:50:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:34.049 17:50:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.049 17:50:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:34.049 17:50:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:34.049 [2024-07-20 17:50:08.719653] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:34.049 [2024-07-20 17:50:08.719742] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.049 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.049 [2024-07-20 17:50:08.777627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:34.306 [2024-07-20 17:50:08.863366] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.306 [2024-07-20 17:50:08.863418] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.306 [2024-07-20 17:50:08.863440] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.306 [2024-07-20 17:50:08.863451] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.306 [2024-07-20 17:50:08.863461] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
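The xtrace entries that follow provision the vfio-user target over the SPDK RPC interface: one VFIOUSER transport, then for each of the two controllers a socket directory, a Malloc bdev, a subsystem, a namespace, and a listener. A condensed sketch of that sequence, assuming nvmf_tgt is already running and rpc.py talks to the default /var/tmp/spdk.sock (the full jenkins workspace paths recorded in the trace are shortened to rpc.py here), rather than the exact test script:

    # enable the vfio-user transport on the running nvmf_tgt
    rpc.py nvmf_create_transport -t VFIOUSER

    # controller 1: socket directory, 64 MB / 512-byte-block Malloc bdev, subsystem, namespace, listener
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

    # controller 2 repeats the same steps with Malloc2, cnode2 and /var/run/vfio-user/domain/vfio-user2/2

An initiator then treats each socket directory as a virtual NVMe controller, as the identify output further down shows.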
00:14:34.306 [2024-07-20 17:50:08.863597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.306 [2024-07-20 17:50:08.863662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.306 [2024-07-20 17:50:08.863729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:34.306 [2024-07-20 17:50:08.863731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.306 17:50:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:34.306 17:50:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:14:34.306 17:50:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:35.237 17:50:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:35.495 17:50:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:35.751 17:50:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:35.751 17:50:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:35.751 17:50:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:35.751 17:50:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:36.009 Malloc1 00:14:36.009 17:50:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:36.266 17:50:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:36.522 17:50:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:36.779 17:50:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:36.779 17:50:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:36.779 17:50:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:37.037 Malloc2 00:14:37.037 17:50:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:37.294 17:50:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:37.551 17:50:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:37.809 17:50:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:37.809 17:50:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:37.809 17:50:12 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:37.809 17:50:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:37.809 17:50:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:37.809 17:50:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:37.809 [2024-07-20 17:50:12.366852] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:37.809 [2024-07-20 17:50:12.366894] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908074 ] 00:14:37.809 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.809 [2024-07-20 17:50:12.399959] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:37.809 [2024-07-20 17:50:12.408356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:37.809 [2024-07-20 17:50:12.408384] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f005b7ee000 00:14:37.809 [2024-07-20 17:50:12.409353] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.809 [2024-07-20 17:50:12.410354] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.809 [2024-07-20 17:50:12.411366] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.809 [2024-07-20 17:50:12.412361] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:37.809 [2024-07-20 17:50:12.413366] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:37.809 [2024-07-20 17:50:12.414380] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.809 [2024-07-20 17:50:12.415375] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:37.809 [2024-07-20 17:50:12.416383] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.809 [2024-07-20 17:50:12.417392] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:37.809 [2024-07-20 17:50:12.417411] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f005a5a4000 00:14:37.809 [2024-07-20 17:50:12.418550] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:37.809 [2024-07-20 17:50:12.434482] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: 
Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:37.809 [2024-07-20 17:50:12.434528] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:37.809 [2024-07-20 17:50:12.439530] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:37.809 [2024-07-20 17:50:12.439580] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:37.809 [2024-07-20 17:50:12.439667] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:37.809 [2024-07-20 17:50:12.439696] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:37.809 [2024-07-20 17:50:12.439706] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:37.809 [2024-07-20 17:50:12.440526] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:37.809 [2024-07-20 17:50:12.440549] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:37.809 [2024-07-20 17:50:12.440563] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:37.809 [2024-07-20 17:50:12.441527] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:37.809 [2024-07-20 17:50:12.441545] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:37.810 [2024-07-20 17:50:12.441558] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:37.810 [2024-07-20 17:50:12.442530] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:37.810 [2024-07-20 17:50:12.442547] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:37.810 [2024-07-20 17:50:12.443536] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:37.810 [2024-07-20 17:50:12.443555] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:37.810 [2024-07-20 17:50:12.443564] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:37.810 [2024-07-20 17:50:12.443575] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:37.810 [2024-07-20 17:50:12.443685] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:37.810 [2024-07-20 17:50:12.443693] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:37.810 [2024-07-20 17:50:12.443701] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:37.810 [2024-07-20 17:50:12.444551] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:37.810 [2024-07-20 17:50:12.445553] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:37.810 [2024-07-20 17:50:12.446558] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:37.810 [2024-07-20 17:50:12.447550] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:37.810 [2024-07-20 17:50:12.447677] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:37.810 [2024-07-20 17:50:12.448573] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:37.810 [2024-07-20 17:50:12.448591] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:37.810 [2024-07-20 17:50:12.448600] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.448624] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:37.810 [2024-07-20 17:50:12.448638] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.448668] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:37.810 [2024-07-20 17:50:12.448678] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:37.810 [2024-07-20 17:50:12.448697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:37.810 [2024-07-20 17:50:12.448764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:37.810 [2024-07-20 17:50:12.448787] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:37.810 [2024-07-20 17:50:12.448819] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:37.810 [2024-07-20 17:50:12.448828] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:37.810 [2024-07-20 17:50:12.448835] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:37.810 [2024-07-20 17:50:12.448854] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 
1 00:14:37.810 [2024-07-20 17:50:12.448863] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:37.810 [2024-07-20 17:50:12.448872] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.448885] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.448901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:37.810 [2024-07-20 17:50:12.448922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:37.810 [2024-07-20 17:50:12.448954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.810 [2024-07-20 17:50:12.448968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.810 [2024-07-20 17:50:12.448980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.810 [2024-07-20 17:50:12.448992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.810 [2024-07-20 17:50:12.449000] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.449016] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.449030] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:37.810 [2024-07-20 17:50:12.449042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:37.810 [2024-07-20 17:50:12.449054] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:37.810 [2024-07-20 17:50:12.449062] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.449073] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.449087] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.449100] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:37.810 [2024-07-20 17:50:12.449135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:37.810 [2024-07-20 17:50:12.449201] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.449217] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.449230] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:37.810 [2024-07-20 17:50:12.449238] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:37.810 [2024-07-20 17:50:12.449247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:37.810 [2024-07-20 17:50:12.449264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:37.810 [2024-07-20 17:50:12.449280] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:37.810 [2024-07-20 17:50:12.449298] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.449312] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.449324] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:37.810 [2024-07-20 17:50:12.449332] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:37.810 [2024-07-20 17:50:12.449341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:37.810 [2024-07-20 17:50:12.449359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:37.810 [2024-07-20 17:50:12.449379] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.449393] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.449405] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:37.810 [2024-07-20 17:50:12.449413] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:37.810 [2024-07-20 17:50:12.449422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:37.810 [2024-07-20 17:50:12.449436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:37.810 [2024-07-20 17:50:12.449449] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.449460] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.449473] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.449484] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.449492] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.449500] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:37.810 [2024-07-20 17:50:12.449508] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:37.810 [2024-07-20 17:50:12.449516] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:37.810 [2024-07-20 17:50:12.449546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:37.810 [2024-07-20 17:50:12.449564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:37.810 [2024-07-20 17:50:12.449582] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:37.810 [2024-07-20 17:50:12.449598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:37.810 [2024-07-20 17:50:12.449614] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:37.810 [2024-07-20 17:50:12.449625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:37.810 [2024-07-20 17:50:12.449641] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:37.810 [2024-07-20 17:50:12.449655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:37.810 [2024-07-20 17:50:12.449673] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:37.810 [2024-07-20 17:50:12.449681] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:37.811 [2024-07-20 17:50:12.449687] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:37.811 [2024-07-20 17:50:12.449693] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:37.811 [2024-07-20 17:50:12.449703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:37.811 [2024-07-20 17:50:12.449713] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:37.811 [2024-07-20 17:50:12.449721] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:37.811 [2024-07-20 17:50:12.449730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:37.811 [2024-07-20 17:50:12.449740] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:37.811 [2024-07-20 17:50:12.449748] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:37.811 [2024-07-20 17:50:12.449756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:37.811 [2024-07-20 17:50:12.449768] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:37.811 [2024-07-20 17:50:12.449791] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:37.811 [2024-07-20 17:50:12.449808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:37.811 [2024-07-20 17:50:12.449820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:37.811 [2024-07-20 17:50:12.449841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:37.811 [2024-07-20 17:50:12.449860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:37.811 [2024-07-20 17:50:12.449876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:37.811 ===================================================== 00:14:37.811 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:37.811 ===================================================== 00:14:37.811 Controller Capabilities/Features 00:14:37.811 ================================ 00:14:37.811 Vendor ID: 4e58 00:14:37.811 Subsystem Vendor ID: 4e58 00:14:37.811 Serial Number: SPDK1 00:14:37.811 Model Number: SPDK bdev Controller 00:14:37.811 Firmware Version: 24.05.1 00:14:37.811 Recommended Arb Burst: 6 00:14:37.811 IEEE OUI Identifier: 8d 6b 50 00:14:37.811 Multi-path I/O 00:14:37.811 May have multiple subsystem ports: Yes 00:14:37.811 May have multiple controllers: Yes 00:14:37.811 Associated with SR-IOV VF: No 00:14:37.811 Max Data Transfer Size: 131072 00:14:37.811 Max Number of Namespaces: 32 00:14:37.811 Max Number of I/O Queues: 127 00:14:37.811 NVMe Specification Version (VS): 1.3 00:14:37.811 NVMe Specification Version (Identify): 1.3 00:14:37.811 Maximum Queue Entries: 256 00:14:37.811 Contiguous Queues Required: Yes 00:14:37.811 Arbitration Mechanisms Supported 00:14:37.811 Weighted Round Robin: Not Supported 00:14:37.811 Vendor Specific: Not Supported 00:14:37.811 Reset Timeout: 15000 ms 00:14:37.811 Doorbell Stride: 4 bytes 00:14:37.811 NVM Subsystem Reset: Not Supported 00:14:37.811 Command Sets Supported 00:14:37.811 NVM Command Set: Supported 00:14:37.811 Boot Partition: Not Supported 00:14:37.811 Memory Page Size Minimum: 4096 bytes 00:14:37.811 Memory Page Size Maximum: 4096 bytes 00:14:37.811 Persistent Memory Region: Not Supported 00:14:37.811 Optional Asynchronous Events Supported 00:14:37.811 Namespace Attribute Notices: Supported 00:14:37.811 Firmware Activation Notices: Not Supported 00:14:37.811 ANA Change Notices: Not Supported 00:14:37.811 PLE Aggregate Log Change Notices: 
Not Supported 00:14:37.811 LBA Status Info Alert Notices: Not Supported 00:14:37.811 EGE Aggregate Log Change Notices: Not Supported 00:14:37.811 Normal NVM Subsystem Shutdown event: Not Supported 00:14:37.811 Zone Descriptor Change Notices: Not Supported 00:14:37.811 Discovery Log Change Notices: Not Supported 00:14:37.811 Controller Attributes 00:14:37.811 128-bit Host Identifier: Supported 00:14:37.811 Non-Operational Permissive Mode: Not Supported 00:14:37.811 NVM Sets: Not Supported 00:14:37.811 Read Recovery Levels: Not Supported 00:14:37.811 Endurance Groups: Not Supported 00:14:37.811 Predictable Latency Mode: Not Supported 00:14:37.811 Traffic Based Keep ALive: Not Supported 00:14:37.811 Namespace Granularity: Not Supported 00:14:37.811 SQ Associations: Not Supported 00:14:37.811 UUID List: Not Supported 00:14:37.811 Multi-Domain Subsystem: Not Supported 00:14:37.811 Fixed Capacity Management: Not Supported 00:14:37.811 Variable Capacity Management: Not Supported 00:14:37.811 Delete Endurance Group: Not Supported 00:14:37.811 Delete NVM Set: Not Supported 00:14:37.811 Extended LBA Formats Supported: Not Supported 00:14:37.811 Flexible Data Placement Supported: Not Supported 00:14:37.811 00:14:37.811 Controller Memory Buffer Support 00:14:37.811 ================================ 00:14:37.811 Supported: No 00:14:37.811 00:14:37.811 Persistent Memory Region Support 00:14:37.811 ================================ 00:14:37.811 Supported: No 00:14:37.811 00:14:37.811 Admin Command Set Attributes 00:14:37.811 ============================ 00:14:37.811 Security Send/Receive: Not Supported 00:14:37.811 Format NVM: Not Supported 00:14:37.811 Firmware Activate/Download: Not Supported 00:14:37.811 Namespace Management: Not Supported 00:14:37.811 Device Self-Test: Not Supported 00:14:37.811 Directives: Not Supported 00:14:37.811 NVMe-MI: Not Supported 00:14:37.811 Virtualization Management: Not Supported 00:14:37.811 Doorbell Buffer Config: Not Supported 00:14:37.811 Get LBA Status Capability: Not Supported 00:14:37.811 Command & Feature Lockdown Capability: Not Supported 00:14:37.811 Abort Command Limit: 4 00:14:37.811 Async Event Request Limit: 4 00:14:37.811 Number of Firmware Slots: N/A 00:14:37.811 Firmware Slot 1 Read-Only: N/A 00:14:37.811 Firmware Activation Without Reset: N/A 00:14:37.811 Multiple Update Detection Support: N/A 00:14:37.811 Firmware Update Granularity: No Information Provided 00:14:37.811 Per-Namespace SMART Log: No 00:14:37.811 Asymmetric Namespace Access Log Page: Not Supported 00:14:37.811 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:37.811 Command Effects Log Page: Supported 00:14:37.811 Get Log Page Extended Data: Supported 00:14:37.811 Telemetry Log Pages: Not Supported 00:14:37.811 Persistent Event Log Pages: Not Supported 00:14:37.811 Supported Log Pages Log Page: May Support 00:14:37.811 Commands Supported & Effects Log Page: Not Supported 00:14:37.811 Feature Identifiers & Effects Log Page:May Support 00:14:37.811 NVMe-MI Commands & Effects Log Page: May Support 00:14:37.811 Data Area 4 for Telemetry Log: Not Supported 00:14:37.811 Error Log Page Entries Supported: 128 00:14:37.811 Keep Alive: Supported 00:14:37.811 Keep Alive Granularity: 10000 ms 00:14:37.811 00:14:37.811 NVM Command Set Attributes 00:14:37.811 ========================== 00:14:37.811 Submission Queue Entry Size 00:14:37.811 Max: 64 00:14:37.811 Min: 64 00:14:37.811 Completion Queue Entry Size 00:14:37.811 Max: 16 00:14:37.811 Min: 16 00:14:37.811 Number of Namespaces: 32 00:14:37.811 Compare 
Command: Supported 00:14:37.811 Write Uncorrectable Command: Not Supported 00:14:37.811 Dataset Management Command: Supported 00:14:37.811 Write Zeroes Command: Supported 00:14:37.811 Set Features Save Field: Not Supported 00:14:37.811 Reservations: Not Supported 00:14:37.811 Timestamp: Not Supported 00:14:37.811 Copy: Supported 00:14:37.811 Volatile Write Cache: Present 00:14:37.811 Atomic Write Unit (Normal): 1 00:14:37.811 Atomic Write Unit (PFail): 1 00:14:37.811 Atomic Compare & Write Unit: 1 00:14:37.811 Fused Compare & Write: Supported 00:14:37.811 Scatter-Gather List 00:14:37.811 SGL Command Set: Supported (Dword aligned) 00:14:37.811 SGL Keyed: Not Supported 00:14:37.811 SGL Bit Bucket Descriptor: Not Supported 00:14:37.811 SGL Metadata Pointer: Not Supported 00:14:37.811 Oversized SGL: Not Supported 00:14:37.811 SGL Metadata Address: Not Supported 00:14:37.811 SGL Offset: Not Supported 00:14:37.811 Transport SGL Data Block: Not Supported 00:14:37.811 Replay Protected Memory Block: Not Supported 00:14:37.811 00:14:37.811 Firmware Slot Information 00:14:37.811 ========================= 00:14:37.811 Active slot: 1 00:14:37.811 Slot 1 Firmware Revision: 24.05.1 00:14:37.811 00:14:37.811 00:14:37.811 Commands Supported and Effects 00:14:37.811 ============================== 00:14:37.811 Admin Commands 00:14:37.811 -------------- 00:14:37.811 Get Log Page (02h): Supported 00:14:37.811 Identify (06h): Supported 00:14:37.811 Abort (08h): Supported 00:14:37.811 Set Features (09h): Supported 00:14:37.811 Get Features (0Ah): Supported 00:14:37.811 Asynchronous Event Request (0Ch): Supported 00:14:37.811 Keep Alive (18h): Supported 00:14:37.811 I/O Commands 00:14:37.811 ------------ 00:14:37.811 Flush (00h): Supported LBA-Change 00:14:37.811 Write (01h): Supported LBA-Change 00:14:37.811 Read (02h): Supported 00:14:37.811 Compare (05h): Supported 00:14:37.811 Write Zeroes (08h): Supported LBA-Change 00:14:37.811 Dataset Management (09h): Supported LBA-Change 00:14:37.811 Copy (19h): Supported LBA-Change 00:14:37.811 Unknown (79h): Supported LBA-Change 00:14:37.811 Unknown (7Ah): Supported 00:14:37.811 00:14:37.812 Error Log 00:14:37.812 ========= 00:14:37.812 00:14:37.812 Arbitration 00:14:37.812 =========== 00:14:37.812 Arbitration Burst: 1 00:14:37.812 00:14:37.812 Power Management 00:14:37.812 ================ 00:14:37.812 Number of Power States: 1 00:14:37.812 Current Power State: Power State #0 00:14:37.812 Power State #0: 00:14:37.812 Max Power: 0.00 W 00:14:37.812 Non-Operational State: Operational 00:14:37.812 Entry Latency: Not Reported 00:14:37.812 Exit Latency: Not Reported 00:14:37.812 Relative Read Throughput: 0 00:14:37.812 Relative Read Latency: 0 00:14:37.812 Relative Write Throughput: 0 00:14:37.812 Relative Write Latency: 0 00:14:37.812 Idle Power: Not Reported 00:14:37.812 Active Power: Not Reported 00:14:37.812 Non-Operational Permissive Mode: Not Supported 00:14:37.812 00:14:37.812 Health Information 00:14:37.812 ================== 00:14:37.812 Critical Warnings: 00:14:37.812 Available Spare Space: OK 00:14:37.812 Temperature: OK 00:14:37.812 Device Reliability: OK 00:14:37.812 Read Only: No 00:14:37.812 Volatile Memory Backup: OK 00:14:37.812 Current Temperature: 0 Kelvin[2024-07-20 17:50:12.449999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:37.812 [2024-07-20 17:50:12.450016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 
p:1 m:0 dnr:0 00:14:37.812 [2024-07-20 17:50:12.450051] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:37.812 [2024-07-20 17:50:12.450068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.812 [2024-07-20 17:50:12.450079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.812 [2024-07-20 17:50:12.450107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.812 [2024-07-20 17:50:12.450117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.812 [2024-07-20 17:50:12.453804] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:37.812 [2024-07-20 17:50:12.453826] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:37.812 [2024-07-20 17:50:12.454600] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:37.812 [2024-07-20 17:50:12.454670] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:37.812 [2024-07-20 17:50:12.454683] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:37.812 [2024-07-20 17:50:12.455605] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:37.812 [2024-07-20 17:50:12.455626] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:37.812 [2024-07-20 17:50:12.455679] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:37.812 [2024-07-20 17:50:12.457644] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:37.812 (-273 Celsius) 00:14:37.812 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:37.812 Available Spare: 0% 00:14:37.812 Available Spare Threshold: 0% 00:14:37.812 Life Percentage Used: 0% 00:14:37.812 Data Units Read: 0 00:14:37.812 Data Units Written: 0 00:14:37.812 Host Read Commands: 0 00:14:37.812 Host Write Commands: 0 00:14:37.812 Controller Busy Time: 0 minutes 00:14:37.812 Power Cycles: 0 00:14:37.812 Power On Hours: 0 hours 00:14:37.812 Unsafe Shutdowns: 0 00:14:37.812 Unrecoverable Media Errors: 0 00:14:37.812 Lifetime Error Log Entries: 0 00:14:37.812 Warning Temperature Time: 0 minutes 00:14:37.812 Critical Temperature Time: 0 minutes 00:14:37.812 00:14:37.812 Number of Queues 00:14:37.812 ================ 00:14:37.812 Number of I/O Submission Queues: 127 00:14:37.812 Number of I/O Completion Queues: 127 00:14:37.812 00:14:37.812 Active Namespaces 00:14:37.812 ================= 00:14:37.812 Namespace ID:1 00:14:37.812 Error Recovery Timeout: Unlimited 00:14:37.812 Command Set Identifier: NVM (00h) 00:14:37.812 Deallocate: Supported 00:14:37.812 Deallocated/Unwritten Error: Not Supported 00:14:37.812 Deallocated Read Value: Unknown 00:14:37.812 
Deallocate in Write Zeroes: Not Supported 00:14:37.812 Deallocated Guard Field: 0xFFFF 00:14:37.812 Flush: Supported 00:14:37.812 Reservation: Supported 00:14:37.812 Namespace Sharing Capabilities: Multiple Controllers 00:14:37.812 Size (in LBAs): 131072 (0GiB) 00:14:37.812 Capacity (in LBAs): 131072 (0GiB) 00:14:37.812 Utilization (in LBAs): 131072 (0GiB) 00:14:37.812 NGUID: BAB240E860C846DA88E41C5AED41472A 00:14:37.812 UUID: bab240e8-60c8-46da-88e4-1c5aed41472a 00:14:37.812 Thin Provisioning: Not Supported 00:14:37.812 Per-NS Atomic Units: Yes 00:14:37.812 Atomic Boundary Size (Normal): 0 00:14:37.812 Atomic Boundary Size (PFail): 0 00:14:37.812 Atomic Boundary Offset: 0 00:14:37.812 Maximum Single Source Range Length: 65535 00:14:37.812 Maximum Copy Length: 65535 00:14:37.812 Maximum Source Range Count: 1 00:14:37.812 NGUID/EUI64 Never Reused: No 00:14:37.812 Namespace Write Protected: No 00:14:37.812 Number of LBA Formats: 1 00:14:37.812 Current LBA Format: LBA Format #00 00:14:37.812 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:37.812 00:14:37.812 17:50:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:37.812 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.069 [2024-07-20 17:50:12.687621] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:43.373 Initializing NVMe Controllers 00:14:43.373 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:43.373 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:43.373 Initialization complete. Launching workers. 00:14:43.373 ======================================================== 00:14:43.373 Latency(us) 00:14:43.373 Device Information : IOPS MiB/s Average min max 00:14:43.373 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35253.00 137.71 3630.76 1154.20 9747.31 00:14:43.374 ======================================================== 00:14:43.374 Total : 35253.00 137.71 3630.76 1154.20 9747.31 00:14:43.374 00:14:43.374 [2024-07-20 17:50:17.710359] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:43.374 17:50:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:43.374 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.374 [2024-07-20 17:50:17.942476] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:48.649 Initializing NVMe Controllers 00:14:48.650 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:48.650 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:48.650 Initialization complete. Launching workers. 
00:14:48.650 ======================================================== 00:14:48.650 Latency(us) 00:14:48.650 Device Information : IOPS MiB/s Average min max 00:14:48.650 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.92 62.70 7979.90 6965.65 11961.20 00:14:48.650 ======================================================== 00:14:48.650 Total : 16050.92 62.70 7979.90 6965.65 11961.20 00:14:48.650 00:14:48.650 [2024-07-20 17:50:22.984975] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:48.650 17:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:48.650 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.650 [2024-07-20 17:50:23.198014] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:53.906 [2024-07-20 17:50:28.277130] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:53.906 Initializing NVMe Controllers 00:14:53.906 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:53.906 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:53.906 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:53.906 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:53.906 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:53.906 Initialization complete. Launching workers. 00:14:53.906 Starting thread on core 2 00:14:53.906 Starting thread on core 3 00:14:53.906 Starting thread on core 1 00:14:53.906 17:50:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:53.906 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.906 [2024-07-20 17:50:28.567266] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.184 [2024-07-20 17:50:31.638726] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.184 Initializing NVMe Controllers 00:14:57.184 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.184 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.184 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:57.184 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:57.184 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:57.184 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:57.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:57.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:57.184 Initialization complete. Launching workers. 
00:14:57.184 Starting thread on core 1 with urgent priority queue 00:14:57.184 Starting thread on core 2 with urgent priority queue 00:14:57.184 Starting thread on core 3 with urgent priority queue 00:14:57.184 Starting thread on core 0 with urgent priority queue 00:14:57.184 SPDK bdev Controller (SPDK1 ) core 0: 5760.33 IO/s 17.36 secs/100000 ios 00:14:57.184 SPDK bdev Controller (SPDK1 ) core 1: 6540.67 IO/s 15.29 secs/100000 ios 00:14:57.184 SPDK bdev Controller (SPDK1 ) core 2: 5985.00 IO/s 16.71 secs/100000 ios 00:14:57.184 SPDK bdev Controller (SPDK1 ) core 3: 5667.67 IO/s 17.64 secs/100000 ios 00:14:57.184 ======================================================== 00:14:57.184 00:14:57.184 17:50:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:57.184 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.184 [2024-07-20 17:50:31.928314] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.184 Initializing NVMe Controllers 00:14:57.184 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.184 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.184 Namespace ID: 1 size: 0GB 00:14:57.184 Initialization complete. 00:14:57.184 INFO: using host memory buffer for IO 00:14:57.184 Hello world! 00:14:57.184 [2024-07-20 17:50:31.962882] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.442 17:50:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:57.442 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.699 [2024-07-20 17:50:32.252226] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:58.633 Initializing NVMe Controllers 00:14:58.633 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:58.633 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:58.633 Initialization complete. Launching workers. 
00:14:58.633 submit (in ns) avg, min, max = 8817.1, 3495.6, 4016123.3 00:14:58.633 complete (in ns) avg, min, max = 24987.5, 2061.1, 5995732.2 00:14:58.633 00:14:58.633 Submit histogram 00:14:58.633 ================ 00:14:58.633 Range in us Cumulative Count 00:14:58.633 3.484 - 3.508: 0.1714% ( 23) 00:14:58.633 3.508 - 3.532: 0.9688% ( 107) 00:14:58.633 3.532 - 3.556: 2.6606% ( 227) 00:14:58.633 3.556 - 3.579: 7.4080% ( 637) 00:14:58.633 3.579 - 3.603: 14.1005% ( 898) 00:14:58.633 3.603 - 3.627: 22.8052% ( 1168) 00:14:58.633 3.627 - 3.650: 31.1447% ( 1119) 00:14:58.633 3.650 - 3.674: 39.7973% ( 1161) 00:14:58.633 3.674 - 3.698: 47.2276% ( 997) 00:14:58.633 3.698 - 3.721: 54.8293% ( 1020) 00:14:58.633 3.721 - 3.745: 59.8152% ( 669) 00:14:58.633 3.745 - 3.769: 63.7651% ( 530) 00:14:58.633 3.769 - 3.793: 67.1188% ( 450) 00:14:58.633 3.793 - 3.816: 70.8600% ( 502) 00:14:58.633 3.816 - 3.840: 74.3777% ( 472) 00:14:58.633 3.840 - 3.864: 78.2009% ( 513) 00:14:58.633 3.864 - 3.887: 81.7111% ( 471) 00:14:58.633 3.887 - 3.911: 84.4835% ( 372) 00:14:58.633 3.911 - 3.935: 86.9355% ( 329) 00:14:58.633 3.935 - 3.959: 88.8508% ( 257) 00:14:58.633 3.959 - 3.982: 90.4978% ( 221) 00:14:58.633 3.982 - 4.006: 91.9586% ( 196) 00:14:58.633 4.006 - 4.030: 93.0243% ( 143) 00:14:58.633 4.030 - 4.053: 94.0304% ( 135) 00:14:58.633 4.053 - 4.077: 94.8949% ( 116) 00:14:58.633 4.077 - 4.101: 95.4241% ( 71) 00:14:58.633 4.101 - 4.124: 95.8414% ( 56) 00:14:58.633 4.124 - 4.148: 96.1991% ( 48) 00:14:58.633 4.148 - 4.172: 96.4525% ( 34) 00:14:58.633 4.172 - 4.196: 96.5718% ( 16) 00:14:58.633 4.196 - 4.219: 96.6761% ( 14) 00:14:58.633 4.219 - 4.243: 96.7655% ( 12) 00:14:58.633 4.243 - 4.267: 96.8848% ( 16) 00:14:58.633 4.267 - 4.290: 96.9817% ( 13) 00:14:58.633 4.290 - 4.314: 97.0636% ( 11) 00:14:58.633 4.314 - 4.338: 97.1158% ( 7) 00:14:58.633 4.338 - 4.361: 97.1605% ( 6) 00:14:58.633 4.361 - 4.385: 97.1978% ( 5) 00:14:58.633 4.385 - 4.409: 97.2351% ( 5) 00:14:58.633 4.409 - 4.433: 97.2500% ( 2) 00:14:58.633 4.433 - 4.456: 97.2723% ( 3) 00:14:58.633 4.456 - 4.480: 97.2798% ( 1) 00:14:58.633 4.480 - 4.504: 97.2872% ( 1) 00:14:58.633 4.504 - 4.527: 97.3096% ( 3) 00:14:58.633 4.527 - 4.551: 97.3245% ( 2) 00:14:58.633 4.551 - 4.575: 97.3468% ( 3) 00:14:58.633 4.575 - 4.599: 97.3618% ( 2) 00:14:58.633 4.599 - 4.622: 97.3692% ( 1) 00:14:58.633 4.622 - 4.646: 97.4214% ( 7) 00:14:58.633 4.646 - 4.670: 97.4437% ( 3) 00:14:58.633 4.670 - 4.693: 97.5034% ( 8) 00:14:58.633 4.693 - 4.717: 97.5853% ( 11) 00:14:58.633 4.717 - 4.741: 97.6300% ( 6) 00:14:58.633 4.741 - 4.764: 97.6673% ( 5) 00:14:58.633 4.764 - 4.788: 97.7195% ( 7) 00:14:58.633 4.788 - 4.812: 97.7269% ( 1) 00:14:58.633 4.812 - 4.836: 97.7717% ( 6) 00:14:58.633 4.836 - 4.859: 97.7940% ( 3) 00:14:58.633 4.859 - 4.883: 97.8238% ( 4) 00:14:58.633 4.883 - 4.907: 97.8462% ( 3) 00:14:58.633 4.907 - 4.930: 97.8909% ( 6) 00:14:58.633 4.930 - 4.954: 97.9207% ( 4) 00:14:58.633 4.954 - 4.978: 97.9431% ( 3) 00:14:58.633 4.978 - 5.001: 97.9803% ( 5) 00:14:58.633 5.001 - 5.025: 97.9878% ( 1) 00:14:58.633 5.025 - 5.049: 98.0101% ( 3) 00:14:58.633 5.049 - 5.073: 98.0325% ( 3) 00:14:58.633 5.073 - 5.096: 98.0549% ( 3) 00:14:58.633 5.096 - 5.120: 98.0772% ( 3) 00:14:58.633 5.144 - 5.167: 98.0847% ( 1) 00:14:58.633 5.167 - 5.191: 98.1070% ( 3) 00:14:58.633 5.215 - 5.239: 98.1145% ( 1) 00:14:58.633 5.239 - 5.262: 98.1219% ( 1) 00:14:58.633 5.310 - 5.333: 98.1294% ( 1) 00:14:58.633 5.333 - 5.357: 98.1443% ( 2) 00:14:58.633 5.404 - 5.428: 98.1517% ( 1) 00:14:58.633 5.523 - 5.547: 98.1592% ( 1) 
00:14:58.633 5.665 - 5.689: 98.1666% ( 1) 00:14:58.633 5.689 - 5.713: 98.1741% ( 1) 00:14:58.633 5.760 - 5.784: 98.1815% ( 1) 00:14:58.633 5.807 - 5.831: 98.1890% ( 1) 00:14:58.633 6.068 - 6.116: 98.1965% ( 1) 00:14:58.633 6.163 - 6.210: 98.2039% ( 1) 00:14:58.633 6.400 - 6.447: 98.2114% ( 1) 00:14:58.633 6.590 - 6.637: 98.2188% ( 1) 00:14:58.633 6.684 - 6.732: 98.2263% ( 1) 00:14:58.633 6.732 - 6.779: 98.2337% ( 1) 00:14:58.633 6.827 - 6.874: 98.2412% ( 1) 00:14:58.633 6.874 - 6.921: 98.2486% ( 1) 00:14:58.633 6.969 - 7.016: 98.2561% ( 1) 00:14:58.633 7.206 - 7.253: 98.2635% ( 1) 00:14:58.633 7.253 - 7.301: 98.2784% ( 2) 00:14:58.633 7.301 - 7.348: 98.2933% ( 2) 00:14:58.633 7.348 - 7.396: 98.3008% ( 1) 00:14:58.633 7.396 - 7.443: 98.3082% ( 1) 00:14:58.633 7.443 - 7.490: 98.3231% ( 2) 00:14:58.633 7.490 - 7.538: 98.3306% ( 1) 00:14:58.633 7.538 - 7.585: 98.3455% ( 2) 00:14:58.633 7.585 - 7.633: 98.3530% ( 1) 00:14:58.633 7.680 - 7.727: 98.3679% ( 2) 00:14:58.633 7.727 - 7.775: 98.3828% ( 2) 00:14:58.633 7.775 - 7.822: 98.4051% ( 3) 00:14:58.633 7.822 - 7.870: 98.4200% ( 2) 00:14:58.633 7.870 - 7.917: 98.4275% ( 1) 00:14:58.633 7.917 - 7.964: 98.4349% ( 1) 00:14:58.633 7.964 - 8.012: 98.4424% ( 1) 00:14:58.633 8.059 - 8.107: 98.4573% ( 2) 00:14:58.633 8.107 - 8.154: 98.4647% ( 1) 00:14:58.633 8.154 - 8.201: 98.4797% ( 2) 00:14:58.633 8.249 - 8.296: 98.4946% ( 2) 00:14:58.633 8.296 - 8.344: 98.5020% ( 1) 00:14:58.633 8.439 - 8.486: 98.5244% ( 3) 00:14:58.633 8.533 - 8.581: 98.5318% ( 1) 00:14:58.633 8.581 - 8.628: 98.5467% ( 2) 00:14:58.633 8.628 - 8.676: 98.5542% ( 1) 00:14:58.633 8.676 - 8.723: 98.5616% ( 1) 00:14:58.633 8.723 - 8.770: 98.5691% ( 1) 00:14:58.633 8.770 - 8.818: 98.5765% ( 1) 00:14:58.633 8.865 - 8.913: 98.5914% ( 2) 00:14:58.633 8.960 - 9.007: 98.5989% ( 1) 00:14:58.633 9.055 - 9.102: 98.6063% ( 1) 00:14:58.633 9.244 - 9.292: 98.6138% ( 1) 00:14:58.633 9.292 - 9.339: 98.6213% ( 1) 00:14:58.633 9.339 - 9.387: 98.6287% ( 1) 00:14:58.633 9.387 - 9.434: 98.6362% ( 1) 00:14:58.633 9.434 - 9.481: 98.6436% ( 1) 00:14:58.633 9.813 - 9.861: 98.6511% ( 1) 00:14:58.633 9.861 - 9.908: 98.6585% ( 1) 00:14:58.633 10.003 - 10.050: 98.6660% ( 1) 00:14:58.633 10.145 - 10.193: 98.6734% ( 1) 00:14:58.633 10.287 - 10.335: 98.6809% ( 1) 00:14:58.633 10.430 - 10.477: 98.6883% ( 1) 00:14:58.633 10.572 - 10.619: 98.6958% ( 1) 00:14:58.633 11.141 - 11.188: 98.7032% ( 1) 00:14:58.633 11.188 - 11.236: 98.7107% ( 1) 00:14:58.633 11.520 - 11.567: 98.7181% ( 1) 00:14:58.633 11.567 - 11.615: 98.7405% ( 3) 00:14:58.633 11.662 - 11.710: 98.7480% ( 1) 00:14:58.633 11.710 - 11.757: 98.7629% ( 2) 00:14:58.633 11.804 - 11.852: 98.7778% ( 2) 00:14:58.633 11.852 - 11.899: 98.7927% ( 2) 00:14:58.633 12.041 - 12.089: 98.8001% ( 1) 00:14:58.633 12.136 - 12.231: 98.8076% ( 1) 00:14:58.633 12.231 - 12.326: 98.8150% ( 1) 00:14:58.633 12.326 - 12.421: 98.8225% ( 1) 00:14:58.633 12.421 - 12.516: 98.8299% ( 1) 00:14:58.633 12.705 - 12.800: 98.8374% ( 1) 00:14:58.633 13.084 - 13.179: 98.8448% ( 1) 00:14:58.633 13.369 - 13.464: 98.8523% ( 1) 00:14:58.633 13.559 - 13.653: 98.8672% ( 2) 00:14:58.633 13.653 - 13.748: 98.8746% ( 1) 00:14:58.633 13.843 - 13.938: 98.8896% ( 2) 00:14:58.633 14.033 - 14.127: 98.8970% ( 1) 00:14:58.633 14.222 - 14.317: 98.9045% ( 1) 00:14:58.633 14.317 - 14.412: 98.9194% ( 2) 00:14:58.633 14.886 - 14.981: 98.9268% ( 1) 00:14:58.633 15.076 - 15.170: 98.9343% ( 1) 00:14:58.633 15.265 - 15.360: 98.9417% ( 1) 00:14:58.633 15.550 - 15.644: 98.9492% ( 1) 00:14:58.633 17.256 - 17.351: 98.9790% ( 4) 
00:14:58.633 17.351 - 17.446: 99.0162% ( 5) 00:14:58.633 17.446 - 17.541: 99.0461% ( 4) 00:14:58.633 17.541 - 17.636: 99.0908% ( 6) 00:14:58.633 17.636 - 17.730: 99.1653% ( 10) 00:14:58.633 17.730 - 17.825: 99.1951% ( 4) 00:14:58.633 17.825 - 17.920: 99.2473% ( 7) 00:14:58.633 17.920 - 18.015: 99.2920% ( 6) 00:14:58.633 18.015 - 18.110: 99.3367% ( 6) 00:14:58.633 18.110 - 18.204: 99.3889% ( 7) 00:14:58.633 18.204 - 18.299: 99.4410% ( 7) 00:14:58.633 18.299 - 18.394: 99.5081% ( 9) 00:14:58.633 18.394 - 18.489: 99.5528% ( 6) 00:14:58.633 18.489 - 18.584: 99.5901% ( 5) 00:14:58.633 18.584 - 18.679: 99.6572% ( 9) 00:14:58.633 18.679 - 18.773: 99.6944% ( 5) 00:14:58.633 18.773 - 18.868: 99.7168% ( 3) 00:14:58.633 18.963 - 19.058: 99.7392% ( 3) 00:14:58.633 19.058 - 19.153: 99.7615% ( 3) 00:14:58.633 19.247 - 19.342: 99.7764% ( 2) 00:14:58.633 19.342 - 19.437: 99.8062% ( 4) 00:14:58.633 19.627 - 19.721: 99.8137% ( 1) 00:14:58.633 20.006 - 20.101: 99.8211% ( 1) 00:14:58.633 20.385 - 20.480: 99.8286% ( 1) 00:14:58.633 20.480 - 20.575: 99.8360% ( 1) 00:14:58.633 23.419 - 23.514: 99.8435% ( 1) 00:14:58.633 25.600 - 25.790: 99.8509% ( 1) 00:14:58.633 27.876 - 28.065: 99.8584% ( 1) 00:14:58.633 29.393 - 29.582: 99.8659% ( 1) 00:14:58.633 32.616 - 32.806: 99.8733% ( 1) 00:14:58.633 1541.310 - 1547.378: 99.8808% ( 1) 00:14:58.633 3980.705 - 4004.978: 99.9776% ( 13) 00:14:58.633 4004.978 - 4029.250: 100.0000% ( 3) 00:14:58.633 00:14:58.633 Complete histogram 00:14:58.633 ================== 00:14:58.633 Range in us Cumulative Count 00:14:58.633 2.050 - 2.062: 0.0075% ( 1) 00:14:58.633 2.062 - 2.074: 14.2123% ( 1906) 00:14:58.634 2.074 - 2.086: 35.9517% ( 2917) 00:14:58.634 2.086 - 2.098: 38.5378% ( 347) 00:14:58.634 2.098 - 2.110: 51.4235% ( 1729) 00:14:58.634 2.110 - 2.121: 58.2352% ( 914) 00:14:58.634 2.121 - 2.133: 60.4785% ( 301) 00:14:58.634 2.133 - 2.145: 71.6798% ( 1503) 00:14:58.634 2.145 - 2.157: 76.8445% ( 693) 00:14:58.634 2.157 - 2.169: 78.7748% ( 259) 00:14:58.634 2.169 - 2.181: 84.3643% ( 750) 00:14:58.634 2.181 - 2.193: 86.7939% ( 326) 00:14:58.634 2.193 - 2.204: 87.6211% ( 111) 00:14:58.634 2.204 - 2.216: 89.7973% ( 292) 00:14:58.634 2.216 - 2.228: 91.2357% ( 193) 00:14:58.634 2.228 - 2.240: 93.1361% ( 255) 00:14:58.634 2.240 - 2.252: 94.7011% ( 210) 00:14:58.634 2.252 - 2.264: 95.0961% ( 53) 00:14:58.634 2.264 - 2.276: 95.2303% ( 18) 00:14:58.634 2.276 - 2.287: 95.3644% ( 18) 00:14:58.634 2.287 - 2.299: 95.6476% ( 38) 00:14:58.634 2.299 - 2.311: 96.0575% ( 55) 00:14:58.634 2.311 - 2.323: 96.2439% ( 25) 00:14:58.634 2.323 - 2.335: 96.2960% ( 7) 00:14:58.634 2.335 - 2.347: 96.3705% ( 10) 00:14:58.634 2.347 - 2.359: 96.5867% ( 29) 00:14:58.634 2.359 - 2.370: 97.0935% ( 68) 00:14:58.634 2.370 - 2.382: 97.4810% ( 52) 00:14:58.634 2.382 - 2.394: 97.8164% ( 45) 00:14:58.634 2.394 - 2.406: 98.0399% ( 30) 00:14:58.634 2.406 - 2.418: 98.1368% ( 13) 00:14:58.634 2.418 - 2.430: 98.2114% ( 10) 00:14:58.634 2.430 - 2.441: 98.3231% ( 15) 00:14:58.634 2.441 - 2.453: 98.3977% ( 10) 00:14:58.634 2.453 - 2.465: 98.4424% ( 6) 00:14:58.634 2.465 - 2.477: 98.5020% ( 8) 00:14:58.634 2.477 - 2.489: 98.5244% ( 3) 00:14:58.634 2.489 - 2.501: 98.5467% ( 3) 00:14:58.634 2.501 - 2.513: 98.5616% ( 2) 00:14:58.634 2.524 - 2.536: 98.5691% ( 1) 00:14:58.634 2.548 - 2.560: 98.5765% ( 1) 00:14:58.634 2.560 - 2.572: 98.5840% ( 1) 00:14:58.634 2.572 - 2.584: 98.5914% ( 1) 00:14:58.634 2.584 - 2.596: 98.6063% ( 2) 00:14:58.634 2.596 - 2.607: 98.6287% ( 3) 00:14:58.634 2.607 - 2.619: 98.6362% ( 1) 00:14:58.634 2.619 - 2.631: 
98.6511% ( 2) 00:14:58.634 2.631 - 2.643: 98.6660% ( 2) 00:14:58.634 2.643 - 2.655: 98.6809% ( 2) 00:14:58.634 2.679 - 2.690: 98.6883% ( 1) 00:14:58.634 2.690 - 2.702: 98.6958% ( 1) 00:14:58.634 2.702 - 2.714: 98.7032% ( 1) 00:14:58.634 2.738 - 2.750: 98.7107% ( 1) 00:14:58.634 2.904 - 2.916: 98.7181% ( 1) 00:14:58.634 3.153 - 3.176: 98.7256% ( 1) 00:14:58.634 3.200 - 3.224: 98.7330% ( 1) 00:14:58.634 3.295 - 3.319: 98.7480% ( 2) 00:14:58.634 3.319 - 3.342: 98.7778% ( 4) 00:14:58.634 3.342 - 3.366: 98.7852% ( 1) 00:14:58.634 3.366 - 3.390: 9[2024-07-20 17:50:33.275328] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:58.634 8.7927% ( 1) 00:14:58.634 3.413 - 3.437: 98.8076% ( 2) 00:14:58.634 3.437 - 3.461: 98.8150% ( 1) 00:14:58.634 3.484 - 3.508: 98.8225% ( 1) 00:14:58.634 3.532 - 3.556: 98.8299% ( 1) 00:14:58.634 3.556 - 3.579: 98.8448% ( 2) 00:14:58.634 3.579 - 3.603: 98.8523% ( 1) 00:14:58.634 3.603 - 3.627: 98.8672% ( 2) 00:14:58.634 3.627 - 3.650: 98.8746% ( 1) 00:14:58.634 3.650 - 3.674: 98.8821% ( 1) 00:14:58.634 3.698 - 3.721: 98.8896% ( 1) 00:14:58.634 3.721 - 3.745: 98.8970% ( 1) 00:14:58.634 3.745 - 3.769: 98.9045% ( 1) 00:14:58.634 3.769 - 3.793: 98.9119% ( 1) 00:14:58.634 3.911 - 3.935: 98.9194% ( 1) 00:14:58.634 4.124 - 4.148: 98.9268% ( 1) 00:14:58.634 4.243 - 4.267: 98.9343% ( 1) 00:14:58.634 5.144 - 5.167: 98.9417% ( 1) 00:14:58.634 5.191 - 5.215: 98.9492% ( 1) 00:14:58.634 5.310 - 5.333: 98.9566% ( 1) 00:14:58.634 5.357 - 5.381: 98.9641% ( 1) 00:14:58.634 5.381 - 5.404: 98.9715% ( 1) 00:14:58.634 5.404 - 5.428: 98.9790% ( 1) 00:14:58.634 5.997 - 6.021: 98.9864% ( 1) 00:14:58.634 6.116 - 6.163: 98.9939% ( 1) 00:14:58.634 6.258 - 6.305: 99.0013% ( 1) 00:14:58.634 6.495 - 6.542: 99.0088% ( 1) 00:14:58.634 6.542 - 6.590: 99.0162% ( 1) 00:14:58.634 6.590 - 6.637: 99.0237% ( 1) 00:14:58.634 6.637 - 6.684: 99.0312% ( 1) 00:14:58.634 6.684 - 6.732: 99.0386% ( 1) 00:14:58.634 7.064 - 7.111: 99.0461% ( 1) 00:14:58.634 8.439 - 8.486: 99.0535% ( 1) 00:14:58.634 11.852 - 11.899: 99.0610% ( 1) 00:14:58.634 13.274 - 13.369: 99.0684% ( 1) 00:14:58.634 15.455 - 15.550: 99.0759% ( 1) 00:14:58.634 15.644 - 15.739: 99.0908% ( 2) 00:14:58.634 15.739 - 15.834: 99.0982% ( 1) 00:14:58.634 15.834 - 15.929: 99.1057% ( 1) 00:14:58.634 15.929 - 16.024: 99.1280% ( 3) 00:14:58.634 16.024 - 16.119: 99.1578% ( 4) 00:14:58.634 16.213 - 16.308: 99.1877% ( 4) 00:14:58.634 16.308 - 16.403: 99.2026% ( 2) 00:14:58.634 16.403 - 16.498: 99.2547% ( 7) 00:14:58.634 16.498 - 16.593: 99.2920% ( 5) 00:14:58.634 16.593 - 16.687: 99.2994% ( 1) 00:14:58.634 16.782 - 16.877: 99.3218% ( 3) 00:14:58.634 16.877 - 16.972: 99.3442% ( 3) 00:14:58.634 16.972 - 17.067: 99.3591% ( 2) 00:14:58.634 17.067 - 17.161: 99.3665% ( 1) 00:14:58.634 17.161 - 17.256: 99.3814% ( 2) 00:14:58.634 17.256 - 17.351: 99.3889% ( 1) 00:14:58.634 17.351 - 17.446: 99.3963% ( 1) 00:14:58.634 17.825 - 17.920: 99.4038% ( 1) 00:14:58.634 18.489 - 18.584: 99.4112% ( 1) 00:14:58.634 20.764 - 20.859: 99.4187% ( 1) 00:14:58.634 25.221 - 25.410: 99.4261% ( 1) 00:14:58.634 26.169 - 26.359: 99.4336% ( 1) 00:14:58.634 3980.705 - 4004.978: 99.9404% ( 68) 00:14:58.634 4004.978 - 4029.250: 99.9925% ( 7) 00:14:58.634 5995.330 - 6019.603: 100.0000% ( 1) 00:14:58.634 00:14:58.634 17:50:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:58.634 17:50:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # 
local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:58.634 17:50:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:58.634 17:50:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:58.634 17:50:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:58.892 [ 00:14:58.892 { 00:14:58.892 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:58.892 "subtype": "Discovery", 00:14:58.892 "listen_addresses": [], 00:14:58.892 "allow_any_host": true, 00:14:58.892 "hosts": [] 00:14:58.892 }, 00:14:58.892 { 00:14:58.892 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:58.892 "subtype": "NVMe", 00:14:58.892 "listen_addresses": [ 00:14:58.892 { 00:14:58.892 "trtype": "VFIOUSER", 00:14:58.892 "adrfam": "IPv4", 00:14:58.892 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:58.892 "trsvcid": "0" 00:14:58.892 } 00:14:58.892 ], 00:14:58.892 "allow_any_host": true, 00:14:58.892 "hosts": [], 00:14:58.892 "serial_number": "SPDK1", 00:14:58.892 "model_number": "SPDK bdev Controller", 00:14:58.892 "max_namespaces": 32, 00:14:58.892 "min_cntlid": 1, 00:14:58.892 "max_cntlid": 65519, 00:14:58.892 "namespaces": [ 00:14:58.892 { 00:14:58.892 "nsid": 1, 00:14:58.892 "bdev_name": "Malloc1", 00:14:58.892 "name": "Malloc1", 00:14:58.892 "nguid": "BAB240E860C846DA88E41C5AED41472A", 00:14:58.892 "uuid": "bab240e8-60c8-46da-88e4-1c5aed41472a" 00:14:58.892 } 00:14:58.892 ] 00:14:58.892 }, 00:14:58.892 { 00:14:58.892 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:58.892 "subtype": "NVMe", 00:14:58.892 "listen_addresses": [ 00:14:58.892 { 00:14:58.892 "trtype": "VFIOUSER", 00:14:58.892 "adrfam": "IPv4", 00:14:58.892 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:58.892 "trsvcid": "0" 00:14:58.892 } 00:14:58.892 ], 00:14:58.892 "allow_any_host": true, 00:14:58.892 "hosts": [], 00:14:58.892 "serial_number": "SPDK2", 00:14:58.892 "model_number": "SPDK bdev Controller", 00:14:58.892 "max_namespaces": 32, 00:14:58.892 "min_cntlid": 1, 00:14:58.892 "max_cntlid": 65519, 00:14:58.892 "namespaces": [ 00:14:58.892 { 00:14:58.892 "nsid": 1, 00:14:58.892 "bdev_name": "Malloc2", 00:14:58.892 "name": "Malloc2", 00:14:58.892 "nguid": "3EA65E4B7C564B16B6F894ECF4F14205", 00:14:58.892 "uuid": "3ea65e4b-7c56-4b16-b6f8-94ecf4f14205" 00:14:58.892 } 00:14:58.892 ] 00:14:58.892 } 00:14:58.892 ] 00:14:58.892 17:50:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:58.892 17:50:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=910597 00:14:58.892 17:50:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:58.892 17:50:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:58.892 17:50:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:14:58.892 17:50:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:58.892 17:50:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:58.892 17:50:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:14:58.892 17:50:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:58.892 17:50:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:58.892 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.150 [2024-07-20 17:50:33.775230] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:59.150 Malloc3 00:14:59.150 17:50:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:59.407 [2024-07-20 17:50:34.133110] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:59.407 17:50:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:59.407 Asynchronous Event Request test 00:14:59.407 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.407 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.407 Registering asynchronous event callbacks... 00:14:59.407 Starting namespace attribute notice tests for all controllers... 00:14:59.407 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:59.407 aer_cb - Changed Namespace 00:14:59.407 Cleaning up... 00:14:59.664 [ 00:14:59.664 { 00:14:59.664 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:59.664 "subtype": "Discovery", 00:14:59.664 "listen_addresses": [], 00:14:59.664 "allow_any_host": true, 00:14:59.664 "hosts": [] 00:14:59.664 }, 00:14:59.664 { 00:14:59.664 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:59.664 "subtype": "NVMe", 00:14:59.664 "listen_addresses": [ 00:14:59.664 { 00:14:59.664 "trtype": "VFIOUSER", 00:14:59.664 "adrfam": "IPv4", 00:14:59.664 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:59.664 "trsvcid": "0" 00:14:59.664 } 00:14:59.664 ], 00:14:59.664 "allow_any_host": true, 00:14:59.664 "hosts": [], 00:14:59.664 "serial_number": "SPDK1", 00:14:59.664 "model_number": "SPDK bdev Controller", 00:14:59.664 "max_namespaces": 32, 00:14:59.664 "min_cntlid": 1, 00:14:59.664 "max_cntlid": 65519, 00:14:59.664 "namespaces": [ 00:14:59.664 { 00:14:59.664 "nsid": 1, 00:14:59.664 "bdev_name": "Malloc1", 00:14:59.664 "name": "Malloc1", 00:14:59.664 "nguid": "BAB240E860C846DA88E41C5AED41472A", 00:14:59.664 "uuid": "bab240e8-60c8-46da-88e4-1c5aed41472a" 00:14:59.664 }, 00:14:59.664 { 00:14:59.664 "nsid": 2, 00:14:59.664 "bdev_name": "Malloc3", 00:14:59.664 "name": "Malloc3", 00:14:59.664 "nguid": "212A6EC9A0E3484DABEAC3F07C922CE2", 00:14:59.664 "uuid": "212a6ec9-a0e3-484d-abea-c3f07c922ce2" 00:14:59.664 } 00:14:59.664 ] 00:14:59.664 }, 00:14:59.664 { 00:14:59.664 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:59.664 "subtype": "NVMe", 00:14:59.664 "listen_addresses": [ 00:14:59.664 { 00:14:59.664 "trtype": "VFIOUSER", 00:14:59.664 "adrfam": "IPv4", 00:14:59.664 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:59.664 "trsvcid": "0" 00:14:59.664 } 00:14:59.664 ], 00:14:59.664 "allow_any_host": true, 00:14:59.664 "hosts": [], 00:14:59.664 "serial_number": "SPDK2", 00:14:59.664 "model_number": "SPDK bdev Controller", 00:14:59.664 
"max_namespaces": 32, 00:14:59.664 "min_cntlid": 1, 00:14:59.664 "max_cntlid": 65519, 00:14:59.664 "namespaces": [ 00:14:59.664 { 00:14:59.664 "nsid": 1, 00:14:59.664 "bdev_name": "Malloc2", 00:14:59.664 "name": "Malloc2", 00:14:59.664 "nguid": "3EA65E4B7C564B16B6F894ECF4F14205", 00:14:59.664 "uuid": "3ea65e4b-7c56-4b16-b6f8-94ecf4f14205" 00:14:59.664 } 00:14:59.664 ] 00:14:59.664 } 00:14:59.664 ] 00:14:59.664 17:50:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 910597 00:14:59.664 17:50:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.664 17:50:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:59.664 17:50:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:59.664 17:50:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:59.664 [2024-07-20 17:50:34.395584] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:59.664 [2024-07-20 17:50:34.395622] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910618 ] 00:14:59.664 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.664 [2024-07-20 17:50:34.428687] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:59.664 [2024-07-20 17:50:34.437102] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:59.664 [2024-07-20 17:50:34.437146] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa983129000 00:14:59.664 [2024-07-20 17:50:34.438116] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.664 [2024-07-20 17:50:34.439123] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.664 [2024-07-20 17:50:34.440132] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.664 [2024-07-20 17:50:34.441142] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:59.664 [2024-07-20 17:50:34.442145] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:59.664 [2024-07-20 17:50:34.443142] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.664 [2024-07-20 17:50:34.444151] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:59.664 [2024-07-20 17:50:34.445158] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.664 [2024-07-20 17:50:34.446168] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:59.664 [2024-07-20 17:50:34.446189] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa981edf000 00:14:59.664 [2024-07-20 17:50:34.447304] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:59.923 [2024-07-20 17:50:34.461505] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:59.923 [2024-07-20 17:50:34.461535] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:59.923 [2024-07-20 17:50:34.466653] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:59.923 [2024-07-20 17:50:34.466702] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:59.923 [2024-07-20 17:50:34.466801] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:59.923 [2024-07-20 17:50:34.466826] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:59.923 [2024-07-20 17:50:34.466840] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:59.923 [2024-07-20 17:50:34.467656] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:59.923 [2024-07-20 17:50:34.467682] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:59.923 [2024-07-20 17:50:34.467695] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:59.923 [2024-07-20 17:50:34.468669] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:59.923 [2024-07-20 17:50:34.468689] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:59.923 [2024-07-20 17:50:34.468702] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:59.923 [2024-07-20 17:50:34.469681] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:59.923 [2024-07-20 17:50:34.469702] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:59.923 [2024-07-20 17:50:34.470685] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:59.923 [2024-07-20 17:50:34.470704] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:59.923 [2024-07-20 17:50:34.470713] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:59.923 [2024-07-20 17:50:34.470724] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:59.923 [2024-07-20 17:50:34.470835] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:59.923 [2024-07-20 17:50:34.470845] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:59.923 [2024-07-20 17:50:34.470853] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:59.923 [2024-07-20 17:50:34.471692] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:59.923 [2024-07-20 17:50:34.472697] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:59.923 [2024-07-20 17:50:34.473709] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:59.923 [2024-07-20 17:50:34.474710] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:59.923 [2024-07-20 17:50:34.474804] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:59.923 [2024-07-20 17:50:34.475731] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:59.923 [2024-07-20 17:50:34.475750] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:59.923 [2024-07-20 17:50:34.475760] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.475811] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:59.923 [2024-07-20 17:50:34.475832] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.475856] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:59.923 [2024-07-20 17:50:34.475866] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.923 [2024-07-20 17:50:34.475884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.923 [2024-07-20 17:50:34.483810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:59.923 [2024-07-20 17:50:34.483836] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:59.923 [2024-07-20 17:50:34.483847] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:59.923 [2024-07-20 17:50:34.483855] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:59.923 [2024-07-20 17:50:34.483863] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:59.923 [2024-07-20 17:50:34.483871] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:59.923 [2024-07-20 17:50:34.483879] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:59.923 [2024-07-20 17:50:34.483887] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.483900] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.483915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:59.923 [2024-07-20 17:50:34.491805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:59.923 [2024-07-20 17:50:34.491830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.923 [2024-07-20 17:50:34.491843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.923 [2024-07-20 17:50:34.491856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.923 [2024-07-20 17:50:34.491868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.923 [2024-07-20 17:50:34.491877] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.491893] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.491909] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:59.923 [2024-07-20 17:50:34.499819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:59.923 [2024-07-20 17:50:34.499849] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:59.923 [2024-07-20 17:50:34.499858] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.499874] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.499889] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.499904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:59.923 [2024-07-20 17:50:34.507818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:59.923 [2024-07-20 17:50:34.507904] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.507921] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.507935] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:59.923 [2024-07-20 17:50:34.507943] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:59.923 [2024-07-20 17:50:34.507953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:59.923 [2024-07-20 17:50:34.515818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:59.923 [2024-07-20 17:50:34.515847] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:59.923 [2024-07-20 17:50:34.515863] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.515877] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.515890] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:59.923 [2024-07-20 17:50:34.515898] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.923 [2024-07-20 17:50:34.515909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.923 [2024-07-20 17:50:34.523818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:59.923 [2024-07-20 17:50:34.523857] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.523873] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.523886] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:59.923 [2024-07-20 17:50:34.523896] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.923 [2024-07-20 17:50:34.523906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.923 [2024-07-20 17:50:34.531820] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:59.923 [2024-07-20 17:50:34.531842] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.531855] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.531874] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.531886] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.531894] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.531903] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:59.923 [2024-07-20 17:50:34.531911] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:59.923 [2024-07-20 17:50:34.531919] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:59.923 [2024-07-20 17:50:34.531946] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:59.923 [2024-07-20 17:50:34.539806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:59.923 [2024-07-20 17:50:34.539834] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:59.923 [2024-07-20 17:50:34.547806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:59.923 [2024-07-20 17:50:34.547831] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:59.923 [2024-07-20 17:50:34.555822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:59.923 [2024-07-20 17:50:34.555847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:59.923 [2024-07-20 17:50:34.563818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:59.923 [2024-07-20 17:50:34.563845] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:59.923 [2024-07-20 17:50:34.563855] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:59.923 [2024-07-20 17:50:34.563862] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:59.923 [2024-07-20 17:50:34.563868] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:59.923 [2024-07-20 17:50:34.563878] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:59.923 [2024-07-20 17:50:34.563890] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:59.923 [2024-07-20 17:50:34.563898] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:59.923 [2024-07-20 17:50:34.563907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:59.923 [2024-07-20 17:50:34.563918] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:59.924 [2024-07-20 17:50:34.563926] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.924 [2024-07-20 17:50:34.563934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.924 [2024-07-20 17:50:34.563946] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:59.924 [2024-07-20 17:50:34.563954] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:59.924 [2024-07-20 17:50:34.563967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:59.924 [2024-07-20 17:50:34.571807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:59.924 [2024-07-20 17:50:34.571834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:59.924 [2024-07-20 17:50:34.571850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:59.924 [2024-07-20 17:50:34.571864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:59.924 ===================================================== 00:14:59.924 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:59.924 ===================================================== 00:14:59.924 Controller Capabilities/Features 00:14:59.924 ================================ 00:14:59.924 Vendor ID: 4e58 00:14:59.924 Subsystem Vendor ID: 4e58 00:14:59.924 Serial Number: SPDK2 00:14:59.924 Model Number: SPDK bdev Controller 00:14:59.924 Firmware Version: 24.05.1 00:14:59.924 Recommended Arb Burst: 6 00:14:59.924 IEEE OUI Identifier: 8d 6b 50 00:14:59.924 Multi-path I/O 00:14:59.924 May have multiple subsystem ports: Yes 00:14:59.924 May have multiple controllers: Yes 00:14:59.924 Associated with SR-IOV VF: No 00:14:59.924 Max Data Transfer Size: 131072 00:14:59.924 Max Number of Namespaces: 32 00:14:59.924 Max Number of I/O Queues: 127 00:14:59.924 NVMe Specification Version (VS): 1.3 00:14:59.924 NVMe Specification Version (Identify): 1.3 00:14:59.924 Maximum Queue Entries: 256 00:14:59.924 Contiguous Queues Required: Yes 00:14:59.924 Arbitration Mechanisms Supported 00:14:59.924 Weighted Round Robin: Not Supported 00:14:59.924 Vendor Specific: Not Supported 00:14:59.924 Reset Timeout: 15000 ms 00:14:59.924 Doorbell Stride: 4 bytes 
00:14:59.924 NVM Subsystem Reset: Not Supported 00:14:59.924 Command Sets Supported 00:14:59.924 NVM Command Set: Supported 00:14:59.924 Boot Partition: Not Supported 00:14:59.924 Memory Page Size Minimum: 4096 bytes 00:14:59.924 Memory Page Size Maximum: 4096 bytes 00:14:59.924 Persistent Memory Region: Not Supported 00:14:59.924 Optional Asynchronous Events Supported 00:14:59.924 Namespace Attribute Notices: Supported 00:14:59.924 Firmware Activation Notices: Not Supported 00:14:59.924 ANA Change Notices: Not Supported 00:14:59.924 PLE Aggregate Log Change Notices: Not Supported 00:14:59.924 LBA Status Info Alert Notices: Not Supported 00:14:59.924 EGE Aggregate Log Change Notices: Not Supported 00:14:59.924 Normal NVM Subsystem Shutdown event: Not Supported 00:14:59.924 Zone Descriptor Change Notices: Not Supported 00:14:59.924 Discovery Log Change Notices: Not Supported 00:14:59.924 Controller Attributes 00:14:59.924 128-bit Host Identifier: Supported 00:14:59.924 Non-Operational Permissive Mode: Not Supported 00:14:59.924 NVM Sets: Not Supported 00:14:59.924 Read Recovery Levels: Not Supported 00:14:59.924 Endurance Groups: Not Supported 00:14:59.924 Predictable Latency Mode: Not Supported 00:14:59.924 Traffic Based Keep ALive: Not Supported 00:14:59.924 Namespace Granularity: Not Supported 00:14:59.924 SQ Associations: Not Supported 00:14:59.924 UUID List: Not Supported 00:14:59.924 Multi-Domain Subsystem: Not Supported 00:14:59.924 Fixed Capacity Management: Not Supported 00:14:59.924 Variable Capacity Management: Not Supported 00:14:59.924 Delete Endurance Group: Not Supported 00:14:59.924 Delete NVM Set: Not Supported 00:14:59.924 Extended LBA Formats Supported: Not Supported 00:14:59.924 Flexible Data Placement Supported: Not Supported 00:14:59.924 00:14:59.924 Controller Memory Buffer Support 00:14:59.924 ================================ 00:14:59.924 Supported: No 00:14:59.924 00:14:59.924 Persistent Memory Region Support 00:14:59.924 ================================ 00:14:59.924 Supported: No 00:14:59.924 00:14:59.924 Admin Command Set Attributes 00:14:59.924 ============================ 00:14:59.924 Security Send/Receive: Not Supported 00:14:59.924 Format NVM: Not Supported 00:14:59.924 Firmware Activate/Download: Not Supported 00:14:59.924 Namespace Management: Not Supported 00:14:59.924 Device Self-Test: Not Supported 00:14:59.924 Directives: Not Supported 00:14:59.924 NVMe-MI: Not Supported 00:14:59.924 Virtualization Management: Not Supported 00:14:59.924 Doorbell Buffer Config: Not Supported 00:14:59.924 Get LBA Status Capability: Not Supported 00:14:59.924 Command & Feature Lockdown Capability: Not Supported 00:14:59.924 Abort Command Limit: 4 00:14:59.924 Async Event Request Limit: 4 00:14:59.924 Number of Firmware Slots: N/A 00:14:59.924 Firmware Slot 1 Read-Only: N/A 00:14:59.924 Firmware Activation Without Reset: N/A 00:14:59.924 Multiple Update Detection Support: N/A 00:14:59.924 Firmware Update Granularity: No Information Provided 00:14:59.924 Per-Namespace SMART Log: No 00:14:59.924 Asymmetric Namespace Access Log Page: Not Supported 00:14:59.924 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:59.924 Command Effects Log Page: Supported 00:14:59.924 Get Log Page Extended Data: Supported 00:14:59.924 Telemetry Log Pages: Not Supported 00:14:59.924 Persistent Event Log Pages: Not Supported 00:14:59.924 Supported Log Pages Log Page: May Support 00:14:59.924 Commands Supported & Effects Log Page: Not Supported 00:14:59.924 Feature Identifiers & Effects Log Page:May 
Support 00:14:59.924 NVMe-MI Commands & Effects Log Page: May Support 00:14:59.924 Data Area 4 for Telemetry Log: Not Supported 00:14:59.924 Error Log Page Entries Supported: 128 00:14:59.924 Keep Alive: Supported 00:14:59.924 Keep Alive Granularity: 10000 ms 00:14:59.924 00:14:59.924 NVM Command Set Attributes 00:14:59.924 ========================== 00:14:59.924 Submission Queue Entry Size 00:14:59.924 Max: 64 00:14:59.924 Min: 64 00:14:59.924 Completion Queue Entry Size 00:14:59.924 Max: 16 00:14:59.924 Min: 16 00:14:59.924 Number of Namespaces: 32 00:14:59.924 Compare Command: Supported 00:14:59.924 Write Uncorrectable Command: Not Supported 00:14:59.924 Dataset Management Command: Supported 00:14:59.924 Write Zeroes Command: Supported 00:14:59.924 Set Features Save Field: Not Supported 00:14:59.924 Reservations: Not Supported 00:14:59.924 Timestamp: Not Supported 00:14:59.924 Copy: Supported 00:14:59.924 Volatile Write Cache: Present 00:14:59.924 Atomic Write Unit (Normal): 1 00:14:59.924 Atomic Write Unit (PFail): 1 00:14:59.924 Atomic Compare & Write Unit: 1 00:14:59.924 Fused Compare & Write: Supported 00:14:59.924 Scatter-Gather List 00:14:59.924 SGL Command Set: Supported (Dword aligned) 00:14:59.924 SGL Keyed: Not Supported 00:14:59.924 SGL Bit Bucket Descriptor: Not Supported 00:14:59.924 SGL Metadata Pointer: Not Supported 00:14:59.924 Oversized SGL: Not Supported 00:14:59.924 SGL Metadata Address: Not Supported 00:14:59.924 SGL Offset: Not Supported 00:14:59.924 Transport SGL Data Block: Not Supported 00:14:59.924 Replay Protected Memory Block: Not Supported 00:14:59.924 00:14:59.924 Firmware Slot Information 00:14:59.924 ========================= 00:14:59.924 Active slot: 1 00:14:59.924 Slot 1 Firmware Revision: 24.05.1 00:14:59.924 00:14:59.924 00:14:59.924 Commands Supported and Effects 00:14:59.924 ============================== 00:14:59.924 Admin Commands 00:14:59.924 -------------- 00:14:59.924 Get Log Page (02h): Supported 00:14:59.924 Identify (06h): Supported 00:14:59.924 Abort (08h): Supported 00:14:59.924 Set Features (09h): Supported 00:14:59.924 Get Features (0Ah): Supported 00:14:59.924 Asynchronous Event Request (0Ch): Supported 00:14:59.924 Keep Alive (18h): Supported 00:14:59.924 I/O Commands 00:14:59.924 ------------ 00:14:59.924 Flush (00h): Supported LBA-Change 00:14:59.924 Write (01h): Supported LBA-Change 00:14:59.924 Read (02h): Supported 00:14:59.924 Compare (05h): Supported 00:14:59.924 Write Zeroes (08h): Supported LBA-Change 00:14:59.924 Dataset Management (09h): Supported LBA-Change 00:14:59.924 Copy (19h): Supported LBA-Change 00:14:59.924 Unknown (79h): Supported LBA-Change 00:14:59.924 Unknown (7Ah): Supported 00:14:59.924 00:14:59.924 Error Log 00:14:59.924 ========= 00:14:59.924 00:14:59.924 Arbitration 00:14:59.924 =========== 00:14:59.924 Arbitration Burst: 1 00:14:59.924 00:14:59.924 Power Management 00:14:59.924 ================ 00:14:59.924 Number of Power States: 1 00:14:59.924 Current Power State: Power State #0 00:14:59.924 Power State #0: 00:14:59.924 Max Power: 0.00 W 00:14:59.924 Non-Operational State: Operational 00:14:59.924 Entry Latency: Not Reported 00:14:59.924 Exit Latency: Not Reported 00:14:59.924 Relative Read Throughput: 0 00:14:59.924 Relative Read Latency: 0 00:14:59.924 Relative Write Throughput: 0 00:14:59.924 Relative Write Latency: 0 00:14:59.924 Idle Power: Not Reported 00:14:59.924 Active Power: Not Reported 00:14:59.924 Non-Operational Permissive Mode: Not Supported 00:14:59.924 00:14:59.924 Health Information 
00:14:59.924 ================== 00:14:59.924 Critical Warnings: 00:14:59.924 Available Spare Space: OK 00:14:59.924 Temperature: OK 00:14:59.924 Device Reliability: OK 00:14:59.924 Read Only: No 00:14:59.924 Volatile Memory Backup: OK 00:14:59.924 Current Temperature: 0 Kelvin[2024-07-20 17:50:34.571986] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:59.924 [2024-07-20 17:50:34.579807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:59.924 [2024-07-20 17:50:34.579851] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:59.924 [2024-07-20 17:50:34.579868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.924 [2024-07-20 17:50:34.579880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.924 [2024-07-20 17:50:34.579890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.924 [2024-07-20 17:50:34.579900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.924 [2024-07-20 17:50:34.579985] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:59.924 [2024-07-20 17:50:34.580006] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:59.924 [2024-07-20 17:50:34.580986] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:59.924 [2024-07-20 17:50:34.581056] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:59.924 [2024-07-20 17:50:34.581070] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:59.924 [2024-07-20 17:50:34.581991] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:59.924 [2024-07-20 17:50:34.582015] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:59.924 [2024-07-20 17:50:34.582066] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:59.924 [2024-07-20 17:50:34.583254] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:59.924 (-273 Celsius) 00:14:59.924 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:59.924 Available Spare: 0% 00:14:59.924 Available Spare Threshold: 0% 00:14:59.924 Life Percentage Used: 0% 00:14:59.924 Data Units Read: 0 00:14:59.924 Data Units Written: 0 00:14:59.924 Host Read Commands: 0 00:14:59.924 Host Write Commands: 0 00:14:59.924 Controller Busy Time: 0 minutes 00:14:59.924 Power Cycles: 0 00:14:59.924 Power On Hours: 0 hours 00:14:59.924 Unsafe Shutdowns: 0 00:14:59.924 Unrecoverable Media Errors: 0 00:14:59.924 Lifetime Error Log Entries: 0 00:14:59.924 Warning Temperature Time: 0 
minutes 00:14:59.924 Critical Temperature Time: 0 minutes 00:14:59.924 00:14:59.924 Number of Queues 00:14:59.924 ================ 00:14:59.924 Number of I/O Submission Queues: 127 00:14:59.924 Number of I/O Completion Queues: 127 00:14:59.924 00:14:59.924 Active Namespaces 00:14:59.924 ================= 00:14:59.924 Namespace ID:1 00:14:59.924 Error Recovery Timeout: Unlimited 00:14:59.925 Command Set Identifier: NVM (00h) 00:14:59.925 Deallocate: Supported 00:14:59.925 Deallocated/Unwritten Error: Not Supported 00:14:59.925 Deallocated Read Value: Unknown 00:14:59.925 Deallocate in Write Zeroes: Not Supported 00:14:59.925 Deallocated Guard Field: 0xFFFF 00:14:59.925 Flush: Supported 00:14:59.925 Reservation: Supported 00:14:59.925 Namespace Sharing Capabilities: Multiple Controllers 00:14:59.925 Size (in LBAs): 131072 (0GiB) 00:14:59.925 Capacity (in LBAs): 131072 (0GiB) 00:14:59.925 Utilization (in LBAs): 131072 (0GiB) 00:14:59.925 NGUID: 3EA65E4B7C564B16B6F894ECF4F14205 00:14:59.925 UUID: 3ea65e4b-7c56-4b16-b6f8-94ecf4f14205 00:14:59.925 Thin Provisioning: Not Supported 00:14:59.925 Per-NS Atomic Units: Yes 00:14:59.925 Atomic Boundary Size (Normal): 0 00:14:59.925 Atomic Boundary Size (PFail): 0 00:14:59.925 Atomic Boundary Offset: 0 00:14:59.925 Maximum Single Source Range Length: 65535 00:14:59.925 Maximum Copy Length: 65535 00:14:59.925 Maximum Source Range Count: 1 00:14:59.925 NGUID/EUI64 Never Reused: No 00:14:59.925 Namespace Write Protected: No 00:14:59.925 Number of LBA Formats: 1 00:14:59.925 Current LBA Format: LBA Format #00 00:14:59.925 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:59.925 00:14:59.925 17:50:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:59.925 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.181 [2024-07-20 17:50:34.812534] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:05.446 Initializing NVMe Controllers 00:15:05.446 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:05.446 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:05.446 Initialization complete. Launching workers. 
00:15:05.446 ======================================================== 00:15:05.446 Latency(us) 00:15:05.446 Device Information : IOPS MiB/s Average min max 00:15:05.446 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35900.04 140.23 3564.72 1150.23 10525.06 00:15:05.446 ======================================================== 00:15:05.446 Total : 35900.04 140.23 3564.72 1150.23 10525.06 00:15:05.446 00:15:05.446 [2024-07-20 17:50:39.920139] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:05.446 17:50:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:05.446 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.446 [2024-07-20 17:50:40.152815] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:10.705 Initializing NVMe Controllers 00:15:10.705 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:10.705 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:10.705 Initialization complete. Launching workers. 00:15:10.705 ======================================================== 00:15:10.705 Latency(us) 00:15:10.705 Device Information : IOPS MiB/s Average min max 00:15:10.705 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33442.90 130.64 3826.69 1190.06 8652.02 00:15:10.705 ======================================================== 00:15:10.705 Total : 33442.90 130.64 3826.69 1190.06 8652.02 00:15:10.705 00:15:10.705 [2024-07-20 17:50:45.174012] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:10.705 17:50:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:10.705 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.705 [2024-07-20 17:50:45.384131] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:15.979 [2024-07-20 17:50:50.531926] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:15.979 Initializing NVMe Controllers 00:15:15.979 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:15.979 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:15.979 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:15.979 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:15.979 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:15.979 Initialization complete. Launching workers. 
00:15:15.980 Starting thread on core 2 00:15:15.980 Starting thread on core 3 00:15:15.980 Starting thread on core 1 00:15:15.980 17:50:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:15.980 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.236 [2024-07-20 17:50:50.829290] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.517 [2024-07-20 17:50:53.911675] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.517 Initializing NVMe Controllers 00:15:19.517 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.517 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.517 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:19.517 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:19.517 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:19.517 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:19.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:19.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:19.517 Initialization complete. Launching workers. 00:15:19.517 Starting thread on core 1 with urgent priority queue 00:15:19.517 Starting thread on core 2 with urgent priority queue 00:15:19.517 Starting thread on core 3 with urgent priority queue 00:15:19.517 Starting thread on core 0 with urgent priority queue 00:15:19.517 SPDK bdev Controller (SPDK2 ) core 0: 4870.33 IO/s 20.53 secs/100000 ios 00:15:19.517 SPDK bdev Controller (SPDK2 ) core 1: 5319.00 IO/s 18.80 secs/100000 ios 00:15:19.517 SPDK bdev Controller (SPDK2 ) core 2: 6424.33 IO/s 15.57 secs/100000 ios 00:15:19.517 SPDK bdev Controller (SPDK2 ) core 3: 5904.00 IO/s 16.94 secs/100000 ios 00:15:19.517 ======================================================== 00:15:19.517 00:15:19.517 17:50:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:19.517 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.517 [2024-07-20 17:50:54.206382] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.517 Initializing NVMe Controllers 00:15:19.517 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.517 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.517 Namespace ID: 1 size: 0GB 00:15:19.517 Initialization complete. 00:15:19.517 INFO: using host memory buffer for IO 00:15:19.517 Hello world! 
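For reference, a minimal sketch of the spdk_nvme_perf invocations exercised in the runs above, with the binary path, transport string and flag values copied from this log; the flag explanations in the comments are the conventional spdk_nvme_perf meanings and are given here as assumptions rather than taken from the log:

# 4 KiB I/O (-o 4096), read workload (-w read), queue depth 128 (-q 128), 5 s run (-t 5),
# core mask 0x2 (-c 0x2), against the vfio-user controller at /var/run/vfio-user/domain/vfio-user2/2
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

# same target and geometry, write workload (-w write)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2

As a sanity check on the latency tables above, IOPS times the 4096-byte I/O size reproduces the MiB/s column: 35900.04 * 4096 / 2^20 ≈ 140.23 MiB/s for the read run and 33442.90 * 4096 / 2^20 ≈ 130.64 MiB/s for the write run.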
00:15:19.517 [2024-07-20 17:50:54.215509] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.517 17:50:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:19.517 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.773 [2024-07-20 17:50:54.511979] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.142 Initializing NVMe Controllers 00:15:21.142 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.142 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.142 Initialization complete. Launching workers. 00:15:21.142 submit (in ns) avg, min, max = 7301.9, 3512.2, 4998785.6 00:15:21.142 complete (in ns) avg, min, max = 26576.5, 2061.1, 6993478.9 00:15:21.142 00:15:21.142 Submit histogram 00:15:21.142 ================ 00:15:21.142 Range in us Cumulative Count 00:15:21.142 3.508 - 3.532: 0.3740% ( 50) 00:15:21.142 3.532 - 3.556: 1.2342% ( 115) 00:15:21.142 3.556 - 3.579: 4.0242% ( 373) 00:15:21.142 3.579 - 3.603: 8.1981% ( 558) 00:15:21.142 3.603 - 3.627: 15.9548% ( 1037) 00:15:21.142 3.627 - 3.650: 25.0131% ( 1211) 00:15:21.142 3.650 - 3.674: 34.9764% ( 1332) 00:15:21.142 3.674 - 3.698: 42.1647% ( 961) 00:15:21.142 3.698 - 3.721: 48.4180% ( 836) 00:15:21.142 3.721 - 3.745: 52.9957% ( 612) 00:15:21.142 3.745 - 3.769: 56.9826% ( 533) 00:15:21.142 3.769 - 3.793: 61.1714% ( 560) 00:15:21.142 3.793 - 3.816: 64.6271% ( 462) 00:15:21.142 3.816 - 3.840: 68.2325% ( 482) 00:15:21.142 3.840 - 3.864: 72.3914% ( 556) 00:15:21.142 3.864 - 3.887: 76.7896% ( 588) 00:15:21.142 3.887 - 3.911: 80.6268% ( 513) 00:15:21.142 3.911 - 3.935: 84.0078% ( 452) 00:15:21.142 3.935 - 3.959: 86.1845% ( 291) 00:15:21.142 3.959 - 3.982: 87.9647% ( 238) 00:15:21.142 3.982 - 4.006: 89.7225% ( 235) 00:15:21.142 4.006 - 4.030: 90.9866% ( 169) 00:15:21.142 4.030 - 4.053: 92.2208% ( 165) 00:15:21.142 4.053 - 4.077: 93.1782% ( 128) 00:15:21.142 4.077 - 4.101: 94.1806% ( 134) 00:15:21.142 4.101 - 4.124: 94.9884% ( 108) 00:15:21.142 4.124 - 4.148: 95.6541% ( 89) 00:15:21.142 4.148 - 4.172: 96.0356% ( 51) 00:15:21.142 4.172 - 4.196: 96.3647% ( 44) 00:15:21.142 4.196 - 4.219: 96.6265% ( 35) 00:15:21.142 4.219 - 4.243: 96.8659% ( 32) 00:15:21.142 4.243 - 4.267: 97.0304% ( 22) 00:15:21.142 4.267 - 4.290: 97.2025% ( 23) 00:15:21.142 4.290 - 4.314: 97.2848% ( 11) 00:15:21.142 4.314 - 4.338: 97.3296% ( 6) 00:15:21.142 4.338 - 4.361: 97.4119% ( 11) 00:15:21.142 4.361 - 4.385: 97.4643% ( 7) 00:15:21.142 4.385 - 4.409: 97.4867% ( 3) 00:15:21.142 4.409 - 4.433: 97.5017% ( 2) 00:15:21.142 4.433 - 4.456: 97.5391% ( 5) 00:15:21.142 4.456 - 4.480: 97.5765% ( 5) 00:15:21.142 4.480 - 4.504: 97.5914% ( 2) 00:15:21.142 4.504 - 4.527: 97.5989% ( 1) 00:15:21.142 4.527 - 4.551: 97.6064% ( 1) 00:15:21.142 4.599 - 4.622: 97.6214% ( 2) 00:15:21.142 4.622 - 4.646: 97.6288% ( 1) 00:15:21.142 4.646 - 4.670: 97.6662% ( 5) 00:15:21.142 4.670 - 4.693: 97.6737% ( 1) 00:15:21.142 4.693 - 4.717: 97.6887% ( 2) 00:15:21.142 4.717 - 4.741: 97.7336% ( 6) 00:15:21.142 4.741 - 4.764: 97.7784% ( 6) 00:15:21.142 4.764 - 4.788: 97.8308% ( 7) 00:15:21.142 4.788 - 4.812: 97.8532% ( 3) 00:15:21.142 4.812 - 4.836: 97.8906% ( 5) 00:15:21.142 4.836 - 4.859: 97.9654% ( 10) 00:15:21.142 4.859 - 4.883: 98.0103% ( 6) 00:15:21.142 
4.883 - 4.907: 98.0328% ( 3) 00:15:21.142 4.907 - 4.930: 98.0477% ( 2) 00:15:21.142 4.930 - 4.954: 98.0851% ( 5) 00:15:21.142 4.954 - 4.978: 98.1150% ( 4) 00:15:21.142 4.978 - 5.001: 98.1599% ( 6) 00:15:21.142 5.001 - 5.025: 98.1973% ( 5) 00:15:21.142 5.025 - 5.049: 98.2347% ( 5) 00:15:21.142 5.049 - 5.073: 98.2721% ( 5) 00:15:21.142 5.073 - 5.096: 98.2946% ( 3) 00:15:21.142 5.096 - 5.120: 98.3020% ( 1) 00:15:21.142 5.120 - 5.144: 98.3245% ( 3) 00:15:21.142 5.144 - 5.167: 98.3619% ( 5) 00:15:21.142 5.167 - 5.191: 98.3694% ( 1) 00:15:21.142 5.191 - 5.215: 98.3768% ( 1) 00:15:21.142 5.215 - 5.239: 98.3843% ( 1) 00:15:21.142 5.239 - 5.262: 98.3993% ( 2) 00:15:21.142 5.286 - 5.310: 98.4068% ( 1) 00:15:21.142 5.357 - 5.381: 98.4142% ( 1) 00:15:21.142 5.404 - 5.428: 98.4217% ( 1) 00:15:21.142 5.476 - 5.499: 98.4292% ( 1) 00:15:21.142 5.570 - 5.594: 98.4367% ( 1) 00:15:21.142 5.713 - 5.736: 98.4442% ( 1) 00:15:21.142 5.950 - 5.973: 98.4516% ( 1) 00:15:21.142 6.021 - 6.044: 98.4591% ( 1) 00:15:21.142 6.400 - 6.447: 98.4666% ( 1) 00:15:21.142 6.447 - 6.495: 98.4741% ( 1) 00:15:21.142 6.684 - 6.732: 98.4965% ( 3) 00:15:21.142 6.827 - 6.874: 98.5040% ( 1) 00:15:21.143 6.874 - 6.921: 98.5115% ( 1) 00:15:21.143 7.016 - 7.064: 98.5190% ( 1) 00:15:21.143 7.064 - 7.111: 98.5339% ( 2) 00:15:21.143 7.111 - 7.159: 98.5489% ( 2) 00:15:21.143 7.253 - 7.301: 98.5638% ( 2) 00:15:21.143 7.301 - 7.348: 98.5713% ( 1) 00:15:21.143 7.348 - 7.396: 98.5788% ( 1) 00:15:21.143 7.396 - 7.443: 98.6012% ( 3) 00:15:21.143 7.443 - 7.490: 98.6087% ( 1) 00:15:21.143 7.490 - 7.538: 98.6162% ( 1) 00:15:21.143 7.538 - 7.585: 98.6237% ( 1) 00:15:21.143 7.585 - 7.633: 98.6536% ( 4) 00:15:21.143 7.633 - 7.680: 98.6611% ( 1) 00:15:21.143 7.727 - 7.775: 98.6760% ( 2) 00:15:21.143 7.775 - 7.822: 98.6910% ( 2) 00:15:21.143 7.822 - 7.870: 98.6985% ( 1) 00:15:21.143 7.917 - 7.964: 98.7060% ( 1) 00:15:21.143 7.964 - 8.012: 98.7134% ( 1) 00:15:21.143 8.012 - 8.059: 98.7284% ( 2) 00:15:21.143 8.249 - 8.296: 98.7359% ( 1) 00:15:21.143 8.296 - 8.344: 98.7583% ( 3) 00:15:21.143 8.344 - 8.391: 98.7733% ( 2) 00:15:21.143 8.439 - 8.486: 98.7882% ( 2) 00:15:21.143 8.486 - 8.533: 98.7957% ( 1) 00:15:21.143 8.581 - 8.628: 98.8032% ( 1) 00:15:21.143 8.628 - 8.676: 98.8107% ( 1) 00:15:21.143 8.818 - 8.865: 98.8182% ( 1) 00:15:21.143 8.960 - 9.007: 98.8256% ( 1) 00:15:21.143 9.007 - 9.055: 98.8331% ( 1) 00:15:21.143 9.055 - 9.102: 98.8481% ( 2) 00:15:21.143 9.150 - 9.197: 98.8556% ( 1) 00:15:21.143 9.197 - 9.244: 98.8630% ( 1) 00:15:21.143 9.481 - 9.529: 98.8705% ( 1) 00:15:21.143 9.576 - 9.624: 98.8855% ( 2) 00:15:21.143 9.861 - 9.908: 98.8930% ( 1) 00:15:21.143 10.003 - 10.050: 98.9004% ( 1) 00:15:21.143 10.193 - 10.240: 98.9079% ( 1) 00:15:21.143 10.572 - 10.619: 98.9154% ( 1) 00:15:21.143 10.619 - 10.667: 98.9229% ( 1) 00:15:21.143 10.714 - 10.761: 98.9304% ( 1) 00:15:21.143 10.809 - 10.856: 98.9453% ( 2) 00:15:21.143 11.378 - 11.425: 98.9528% ( 1) 00:15:21.143 11.520 - 11.567: 98.9603% ( 1) 00:15:21.143 11.567 - 11.615: 98.9678% ( 1) 00:15:21.143 11.947 - 11.994: 98.9752% ( 1) 00:15:21.143 12.089 - 12.136: 98.9827% ( 1) 00:15:21.143 12.610 - 12.705: 98.9902% ( 1) 00:15:21.143 13.464 - 13.559: 98.9977% ( 1) 00:15:21.143 13.653 - 13.748: 99.0052% ( 1) 00:15:21.143 13.748 - 13.843: 99.0126% ( 1) 00:15:21.143 13.938 - 14.033: 99.0201% ( 1) 00:15:21.143 17.256 - 17.351: 99.0276% ( 1) 00:15:21.143 17.351 - 17.446: 99.0650% ( 5) 00:15:21.143 17.446 - 17.541: 99.0800% ( 2) 00:15:21.143 17.541 - 17.636: 99.1174% ( 5) 00:15:21.143 17.636 - 17.730: 99.1473% ( 
4) 00:15:21.143 17.730 - 17.825: 99.1996% ( 7) 00:15:21.143 17.825 - 17.920: 99.2146% ( 2) 00:15:21.143 17.920 - 18.015: 99.2744% ( 8) 00:15:21.143 18.015 - 18.110: 99.3193% ( 6) 00:15:21.143 18.110 - 18.204: 99.4091% ( 12) 00:15:21.143 18.204 - 18.299: 99.4914% ( 11) 00:15:21.143 18.299 - 18.394: 99.6185% ( 17) 00:15:21.143 18.394 - 18.489: 99.6484% ( 4) 00:15:21.143 18.489 - 18.584: 99.6933% ( 6) 00:15:21.143 18.584 - 18.679: 99.7457% ( 7) 00:15:21.143 18.679 - 18.773: 99.7831% ( 5) 00:15:21.143 18.773 - 18.868: 99.7980% ( 2) 00:15:21.143 18.963 - 19.058: 99.8130% ( 2) 00:15:21.143 19.058 - 19.153: 99.8280% ( 2) 00:15:21.143 19.153 - 19.247: 99.8354% ( 1) 00:15:21.143 19.247 - 19.342: 99.8429% ( 1) 00:15:21.143 19.342 - 19.437: 99.8579% ( 2) 00:15:21.143 19.437 - 19.532: 99.8654% ( 1) 00:15:21.143 19.532 - 19.627: 99.8803% ( 2) 00:15:21.143 19.627 - 19.721: 99.8878% ( 1) 00:15:21.143 20.859 - 20.954: 99.8953% ( 1) 00:15:21.143 21.239 - 21.333: 99.9028% ( 1) 00:15:21.143 22.661 - 22.756: 99.9102% ( 1) 00:15:21.143 23.514 - 23.609: 99.9177% ( 1) 00:15:21.143 3980.705 - 4004.978: 99.9850% ( 9) 00:15:21.143 4004.978 - 4029.250: 99.9925% ( 1) 00:15:21.143 4975.881 - 5000.154: 100.0000% ( 1) 00:15:21.143 00:15:21.143 Complete histogram 00:15:21.143 ================== 00:15:21.143 Range in us Cumulative Count 00:15:21.143 2.050 - 2.062: 0.0075% ( 1) 00:15:21.143 2.062 - 2.074: 11.4668% ( 1532) 00:15:21.143 2.074 - 2.086: 33.8170% ( 2988) 00:15:21.143 2.086 - 2.098: 36.1284% ( 309) 00:15:21.143 2.098 - 2.110: 48.0963% ( 1600) 00:15:21.143 2.110 - 2.121: 55.7035% ( 1017) 00:15:21.143 2.121 - 2.133: 57.1097% ( 188) 00:15:21.143 2.133 - 2.145: 65.9286% ( 1179) 00:15:21.143 2.145 - 2.157: 71.5012% ( 745) 00:15:21.143 2.157 - 2.169: 72.7654% ( 169) 00:15:21.143 2.169 - 2.181: 77.3805% ( 617) 00:15:21.143 2.181 - 2.193: 79.9536% ( 344) 00:15:21.143 2.193 - 2.204: 80.6418% ( 92) 00:15:21.143 2.204 - 2.216: 84.3593% ( 497) 00:15:21.143 2.216 - 2.228: 87.1643% ( 375) 00:15:21.143 2.228 - 2.240: 89.1914% ( 271) 00:15:21.143 2.240 - 2.252: 92.1909% ( 401) 00:15:21.143 2.252 - 2.264: 93.5822% ( 186) 00:15:21.143 2.264 - 2.276: 93.8664% ( 38) 00:15:21.143 2.276 - 2.287: 94.1806% ( 42) 00:15:21.143 2.287 - 2.299: 94.4274% ( 33) 00:15:21.143 2.299 - 2.311: 94.9809% ( 74) 00:15:21.143 2.311 - 2.323: 95.6317% ( 87) 00:15:21.143 2.323 - 2.335: 95.7813% ( 20) 00:15:21.143 2.335 - 2.347: 95.8411% ( 8) 00:15:21.143 2.347 - 2.359: 95.9907% ( 20) 00:15:21.143 2.359 - 2.370: 96.2675% ( 37) 00:15:21.143 2.370 - 2.382: 96.5742% ( 41) 00:15:21.143 2.382 - 2.394: 97.0304% ( 61) 00:15:21.143 2.394 - 2.406: 97.4344% ( 54) 00:15:21.143 2.406 - 2.418: 97.5466% ( 15) 00:15:21.143 2.418 - 2.430: 97.6662% ( 16) 00:15:21.143 2.430 - 2.441: 97.7710% ( 14) 00:15:21.143 2.441 - 2.453: 97.9206% ( 20) 00:15:21.143 2.453 - 2.465: 97.9879% ( 9) 00:15:21.143 2.465 - 2.477: 98.1674% ( 24) 00:15:21.143 2.477 - 2.489: 98.2497% ( 11) 00:15:21.143 2.489 - 2.501: 98.2721% ( 3) 00:15:21.143 2.501 - 2.513: 98.3020% ( 4) 00:15:21.143 2.513 - 2.524: 98.3469% ( 6) 00:15:21.143 2.524 - 2.536: 98.4068% ( 8) 00:15:21.143 2.536 - 2.548: 98.4142% ( 1) 00:15:21.143 2.548 - 2.560: 98.4217% ( 1) 00:15:21.143 2.560 - 2.572: 98.4516% ( 4) 00:15:21.143 2.572 - 2.584: 98.4666% ( 2) 00:15:21.143 2.596 - 2.607: 98.4741% ( 1) 00:15:21.143 2.643 - 2.655: 98.4890% ( 2) 00:15:21.143 2.655 - 2.667: 98.4965% ( 1) 00:15:21.143 2.702 - 2.714: 98.5040% ( 1) 00:15:21.143 2.726 - 2.738: 98.5115% ( 1) 00:15:21.143 2.738 - 2.750: 98.5190% ( 1) 00:15:21.143 3.413 - 3.437: 
98.5264% ( 1) 00:15:21.143 3.437 - 3.461: 98.5414% ( 2) 00:15:21.143 3.461 - 3.484: 98.5489% ( 1) 00:15:21.143 3.484 - 3.508: 98.5713% ( 3) 00:15:21.143 3.556 - 3.579: 98.5938% ( 3) 00:15:21.143 3.603 - 3.627: 98.6012% ( 1) 00:15:21.143 3.627 - 3.650: 98.6237% ( 3) 00:15:21.143 3.650 - 3.674: 98.6312% ( 1) 00:15:21.143 3.698 - 3.721: 98.6461% ( 2) 00:15:21.143 3.721 - 3.745: 98.6686% ( 3) 00:15:21.143 3.745 - 3.769: 98.6760% ( 1) 00:15:21.143 3.769 - 3.793: 98.7060% ( 4) 00:15:21.143 3.911 - 3.935: 98.7134% ( 1) 00:15:21.143 4.172 - 4.196: 98.7209% ( 1) 00:15:21.143 5.167 - 5.191: 98.7284% ( 1) 00:15:21.143 5.262 - 5.286: 98.7359% ( 1) 00:15:21.143 5.310 - 5.333: 98.7434% ( 1) 00:15:21.143 5.333 - 5.357: 98.7508% ( 1) 00:15:21.143 5.428 - 5.452: 98.7583% ( 1) 00:15:21.143 5.641 - 5.665: 98.7658% ( 1) 00:15:21.143 5.760 - 5.784: 98.7733% ( 1) 00:15:21.143 5.973 - 5.997: 98.7808% ( 1) 00:15:21.143 6.116 - 6.163: 98.7957% ( 2) 00:15:21.143 6.163 - 6.210: 98.8032% ( 1) 00:15:21.143 6.210 - 6.258: 98.8182% ( 2) 00:15:21.143 6.258 - 6.305: 98.8256% ( 1) 00:15:21.143 6.305 - 6.353: 98.8331% ( 1) 00:15:21.143 6.353 - 6.400: 98.8406% ( 1) 00:15:21.143 6.542 - 6.590: 98.8556% ( 2) 00:15:21.143 7.064 - 7.111: 98.8630% ( 1) 00:15:21.143 7.159 - 7.206: 98.8705% ( 1) 00:15:21.143 7.253 - 7.301: 98.8780% ( 1) 00:15:21.143 15.550 - 15.644: 98.8855% ( 1) 00:15:21.143 15.644 - 15.739: 9[2024-07-20 17:50:55.605486] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.143 8.9004% ( 2) 00:15:21.143 15.739 - 15.834: 98.9229% ( 3) 00:15:21.143 15.834 - 15.929: 98.9453% ( 3) 00:15:21.143 15.929 - 16.024: 98.9752% ( 4) 00:15:21.143 16.024 - 16.119: 98.9977% ( 3) 00:15:21.143 16.119 - 16.213: 99.0351% ( 5) 00:15:21.143 16.213 - 16.308: 99.0650% ( 4) 00:15:21.143 16.308 - 16.403: 99.0800% ( 2) 00:15:21.143 16.403 - 16.498: 99.1323% ( 7) 00:15:21.143 16.498 - 16.593: 99.1772% ( 6) 00:15:21.143 16.593 - 16.687: 99.2146% ( 5) 00:15:21.143 16.687 - 16.782: 99.2296% ( 2) 00:15:21.143 16.782 - 16.877: 99.2595% ( 4) 00:15:21.143 16.877 - 16.972: 99.2819% ( 3) 00:15:21.143 16.972 - 17.067: 99.3044% ( 3) 00:15:21.143 17.067 - 17.161: 99.3193% ( 2) 00:15:21.143 17.161 - 17.256: 99.3343% ( 2) 00:15:21.143 17.256 - 17.351: 99.3418% ( 1) 00:15:21.143 17.351 - 17.446: 99.3492% ( 1) 00:15:21.143 17.541 - 17.636: 99.3567% ( 1) 00:15:21.143 17.825 - 17.920: 99.3642% ( 1) 00:15:21.143 17.920 - 18.015: 99.3717% ( 1) 00:15:21.143 18.204 - 18.299: 99.3866% ( 2) 00:15:21.143 603.781 - 606.815: 99.3941% ( 1) 00:15:21.143 3009.801 - 3021.938: 99.4016% ( 1) 00:15:21.143 3021.938 - 3034.074: 99.4091% ( 1) 00:15:21.143 3980.705 - 4004.978: 99.8953% ( 65) 00:15:21.143 4004.978 - 4029.250: 99.9925% ( 13) 00:15:21.143 6990.507 - 7039.052: 100.0000% ( 1) 00:15:21.143 00:15:21.143 17:50:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:21.143 17:50:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:21.143 17:50:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:21.143 17:50:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:21.143 17:50:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:21.143 [ 00:15:21.143 { 
00:15:21.143 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:21.143 "subtype": "Discovery", 00:15:21.143 "listen_addresses": [], 00:15:21.143 "allow_any_host": true, 00:15:21.143 "hosts": [] 00:15:21.143 }, 00:15:21.143 { 00:15:21.143 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:21.143 "subtype": "NVMe", 00:15:21.143 "listen_addresses": [ 00:15:21.143 { 00:15:21.143 "trtype": "VFIOUSER", 00:15:21.143 "adrfam": "IPv4", 00:15:21.143 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:21.143 "trsvcid": "0" 00:15:21.143 } 00:15:21.143 ], 00:15:21.143 "allow_any_host": true, 00:15:21.143 "hosts": [], 00:15:21.143 "serial_number": "SPDK1", 00:15:21.143 "model_number": "SPDK bdev Controller", 00:15:21.143 "max_namespaces": 32, 00:15:21.143 "min_cntlid": 1, 00:15:21.143 "max_cntlid": 65519, 00:15:21.143 "namespaces": [ 00:15:21.143 { 00:15:21.143 "nsid": 1, 00:15:21.143 "bdev_name": "Malloc1", 00:15:21.143 "name": "Malloc1", 00:15:21.143 "nguid": "BAB240E860C846DA88E41C5AED41472A", 00:15:21.143 "uuid": "bab240e8-60c8-46da-88e4-1c5aed41472a" 00:15:21.143 }, 00:15:21.143 { 00:15:21.143 "nsid": 2, 00:15:21.143 "bdev_name": "Malloc3", 00:15:21.143 "name": "Malloc3", 00:15:21.143 "nguid": "212A6EC9A0E3484DABEAC3F07C922CE2", 00:15:21.143 "uuid": "212a6ec9-a0e3-484d-abea-c3f07c922ce2" 00:15:21.143 } 00:15:21.143 ] 00:15:21.143 }, 00:15:21.143 { 00:15:21.143 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:21.143 "subtype": "NVMe", 00:15:21.143 "listen_addresses": [ 00:15:21.143 { 00:15:21.143 "trtype": "VFIOUSER", 00:15:21.143 "adrfam": "IPv4", 00:15:21.143 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:21.143 "trsvcid": "0" 00:15:21.143 } 00:15:21.143 ], 00:15:21.143 "allow_any_host": true, 00:15:21.143 "hosts": [], 00:15:21.143 "serial_number": "SPDK2", 00:15:21.143 "model_number": "SPDK bdev Controller", 00:15:21.143 "max_namespaces": 32, 00:15:21.143 "min_cntlid": 1, 00:15:21.143 "max_cntlid": 65519, 00:15:21.143 "namespaces": [ 00:15:21.143 { 00:15:21.143 "nsid": 1, 00:15:21.143 "bdev_name": "Malloc2", 00:15:21.143 "name": "Malloc2", 00:15:21.143 "nguid": "3EA65E4B7C564B16B6F894ECF4F14205", 00:15:21.143 "uuid": "3ea65e4b-7c56-4b16-b6f8-94ecf4f14205" 00:15:21.143 } 00:15:21.143 ] 00:15:21.143 } 00:15:21.143 ] 00:15:21.143 17:50:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:21.143 17:50:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=913139 00:15:21.143 17:50:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:21.143 17:50:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:21.143 17:50:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:21.143 17:50:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:21.143 17:50:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:21.143 17:50:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:21.143 17:50:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:21.143 17:50:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:21.401 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.401 [2024-07-20 17:50:56.059250] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.401 Malloc4 00:15:21.401 17:50:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:21.658 [2024-07-20 17:50:56.423765] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.658 17:50:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:21.915 Asynchronous Event Request test 00:15:21.915 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.915 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.915 Registering asynchronous event callbacks... 00:15:21.915 Starting namespace attribute notice tests for all controllers... 00:15:21.915 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:21.915 aer_cb - Changed Namespace 00:15:21.915 Cleaning up... 00:15:21.915 [ 00:15:21.915 { 00:15:21.915 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:21.915 "subtype": "Discovery", 00:15:21.915 "listen_addresses": [], 00:15:21.915 "allow_any_host": true, 00:15:21.915 "hosts": [] 00:15:21.915 }, 00:15:21.915 { 00:15:21.915 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:21.915 "subtype": "NVMe", 00:15:21.915 "listen_addresses": [ 00:15:21.915 { 00:15:21.915 "trtype": "VFIOUSER", 00:15:21.915 "adrfam": "IPv4", 00:15:21.915 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:21.915 "trsvcid": "0" 00:15:21.915 } 00:15:21.915 ], 00:15:21.915 "allow_any_host": true, 00:15:21.915 "hosts": [], 00:15:21.915 "serial_number": "SPDK1", 00:15:21.915 "model_number": "SPDK bdev Controller", 00:15:21.915 "max_namespaces": 32, 00:15:21.915 "min_cntlid": 1, 00:15:21.915 "max_cntlid": 65519, 00:15:21.915 "namespaces": [ 00:15:21.915 { 00:15:21.915 "nsid": 1, 00:15:21.915 "bdev_name": "Malloc1", 00:15:21.915 "name": "Malloc1", 00:15:21.915 "nguid": "BAB240E860C846DA88E41C5AED41472A", 00:15:21.915 "uuid": "bab240e8-60c8-46da-88e4-1c5aed41472a" 00:15:21.915 }, 00:15:21.915 { 00:15:21.915 "nsid": 2, 00:15:21.915 "bdev_name": "Malloc3", 00:15:21.915 "name": "Malloc3", 00:15:21.915 "nguid": "212A6EC9A0E3484DABEAC3F07C922CE2", 00:15:21.915 "uuid": "212a6ec9-a0e3-484d-abea-c3f07c922ce2" 00:15:21.915 } 00:15:21.915 ] 00:15:21.915 }, 00:15:21.915 { 00:15:21.915 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:21.915 "subtype": "NVMe", 00:15:21.915 "listen_addresses": [ 00:15:21.915 { 00:15:21.915 "trtype": "VFIOUSER", 00:15:21.915 "adrfam": "IPv4", 00:15:21.915 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:21.915 "trsvcid": "0" 00:15:21.915 } 00:15:21.915 ], 00:15:21.915 "allow_any_host": true, 00:15:21.915 "hosts": [], 00:15:21.915 "serial_number": "SPDK2", 00:15:21.915 "model_number": "SPDK bdev Controller", 00:15:21.915 
"max_namespaces": 32, 00:15:21.915 "min_cntlid": 1, 00:15:21.915 "max_cntlid": 65519, 00:15:21.915 "namespaces": [ 00:15:21.915 { 00:15:21.915 "nsid": 1, 00:15:21.915 "bdev_name": "Malloc2", 00:15:21.915 "name": "Malloc2", 00:15:21.915 "nguid": "3EA65E4B7C564B16B6F894ECF4F14205", 00:15:21.915 "uuid": "3ea65e4b-7c56-4b16-b6f8-94ecf4f14205" 00:15:21.915 }, 00:15:21.915 { 00:15:21.915 "nsid": 2, 00:15:21.915 "bdev_name": "Malloc4", 00:15:21.915 "name": "Malloc4", 00:15:21.915 "nguid": "98D278F0226E405780D4194A6B8B5ADB", 00:15:21.915 "uuid": "98d278f0-226e-4057-80d4-194a6b8b5adb" 00:15:21.915 } 00:15:21.915 ] 00:15:21.915 } 00:15:21.915 ] 00:15:21.915 17:50:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 913139 00:15:21.915 17:50:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:21.915 17:50:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 907654 00:15:21.915 17:50:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 907654 ']' 00:15:21.915 17:50:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 907654 00:15:21.915 17:50:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:21.915 17:50:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:21.915 17:50:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 907654 00:15:21.915 17:50:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:21.915 17:50:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:21.915 17:50:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 907654' 00:15:21.915 killing process with pid 907654 00:15:21.915 17:50:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 907654 00:15:21.915 17:50:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 907654 00:15:22.481 17:50:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:22.481 17:50:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:22.481 17:50:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:22.481 17:50:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:22.481 17:50:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:22.481 17:50:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=913281 00:15:22.481 17:50:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:22.481 17:50:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 913281' 00:15:22.481 Process pid: 913281 00:15:22.481 17:50:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:22.481 17:50:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 913281 00:15:22.481 17:50:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 913281 ']' 00:15:22.481 17:50:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.481 17:50:57 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:15:22.481 17:50:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.481 17:50:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:22.481 17:50:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:22.481 [2024-07-20 17:50:57.081296] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:22.481 [2024-07-20 17:50:57.082404] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:22.481 [2024-07-20 17:50:57.082478] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.481 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.481 [2024-07-20 17:50:57.146120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.481 [2024-07-20 17:50:57.235462] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.481 [2024-07-20 17:50:57.235534] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.481 [2024-07-20 17:50:57.235561] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.481 [2024-07-20 17:50:57.235576] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.481 [2024-07-20 17:50:57.235588] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:22.481 [2024-07-20 17:50:57.235652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.481 [2024-07-20 17:50:57.235709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.481 [2024-07-20 17:50:57.236507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.481 [2024-07-20 17:50:57.236512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.738 [2024-07-20 17:50:57.344543] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:22.738 [2024-07-20 17:50:57.344730] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:22.738 [2024-07-20 17:50:57.345099] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:22.738 [2024-07-20 17:50:57.345647] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:22.738 [2024-07-20 17:50:57.345903] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
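For reference, the interrupt-mode bring-up exercised in this pass reduces to the short shell sequence below. This is a sketch reconstructed from the commands traced around this point (the nvmf_tgt launch above and the transport creation immediately below); paths are abbreviated relative to the spdk checkout, and it is not an additional step that was executed:
# launch the target on cores 0-3 in interrupt mode (as traced above)
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
# once the RPC socket is listening, create the VFIOUSER transport with the
# interrupt-mode transport flags used by this pass
scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I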
00:15:22.739 17:50:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:22.739 17:50:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:22.739 17:50:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:23.670 17:50:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:23.929 17:50:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:23.929 17:50:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:23.929 17:50:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:23.929 17:50:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:23.929 17:50:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:24.187 Malloc1 00:15:24.445 17:50:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:24.703 17:50:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:24.960 17:50:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:24.960 17:50:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:24.960 17:50:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:24.960 17:50:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:25.218 Malloc2 00:15:25.218 17:51:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:25.475 17:51:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:25.732 17:51:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:25.989 17:51:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:25.989 17:51:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 913281 00:15:25.989 17:51:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 913281 ']' 00:15:25.989 17:51:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 913281 00:15:26.247 17:51:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:26.247 17:51:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:26.247 17:51:00 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 913281 00:15:26.247 17:51:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:26.247 17:51:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:26.247 17:51:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 913281' 00:15:26.247 killing process with pid 913281 00:15:26.247 17:51:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 913281 00:15:26.247 17:51:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 913281 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:26.506 00:15:26.506 real 0m52.473s 00:15:26.506 user 3m26.970s 00:15:26.506 sys 0m4.408s 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:26.506 ************************************ 00:15:26.506 END TEST nvmf_vfio_user 00:15:26.506 ************************************ 00:15:26.506 17:51:01 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:26.506 17:51:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:26.506 17:51:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:26.506 17:51:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:26.506 ************************************ 00:15:26.506 START TEST nvmf_vfio_user_nvme_compliance 00:15:26.506 ************************************ 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:26.506 * Looking for test storage... 
00:15:26.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=913950 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:26.506 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 913950' 00:15:26.507 Process pid: 913950 00:15:26.507 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:26.507 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 913950 00:15:26.507 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 913950 ']' 00:15:26.507 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.507 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:26.507 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.507 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:26.507 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.507 [2024-07-20 17:51:01.235506] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:26.507 [2024-07-20 17:51:01.235587] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.507 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.764 [2024-07-20 17:51:01.301897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:26.764 [2024-07-20 17:51:01.394998] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.764 [2024-07-20 17:51:01.395069] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.764 [2024-07-20 17:51:01.395086] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.764 [2024-07-20 17:51:01.395099] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.764 [2024-07-20 17:51:01.395111] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:26.764 [2024-07-20 17:51:01.395172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.764 [2024-07-20 17:51:01.395206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.764 [2024-07-20 17:51:01.395210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.764 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:26.764 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:15:26.764 17:51:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:28.136 malloc0 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.136 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:28.137 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.137 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:28.137 17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.137 
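For reference, the compliance target configured above amounts to roughly the following rpc.py sequence (the script's rpc_cmd helper issues the same RPCs); subsystem name, malloc size and socket path are taken verbatim from the trace, and this sketch is not an extra step of the test:
# vfio-user transport plus a 64 MB malloc bdev with 512-byte blocks
scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
# attach the bdev to nqn.2021-09.io.spdk:cnode0 and listen on /var/run/vfio-user
scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0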
17:51:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:28.137 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.137 00:15:28.137 00:15:28.137 CUnit - A unit testing framework for C - Version 2.1-3 00:15:28.137 http://cunit.sourceforge.net/ 00:15:28.137 00:15:28.137 00:15:28.137 Suite: nvme_compliance 00:15:28.137 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-20 17:51:02.739320] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.137 [2024-07-20 17:51:02.740738] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:28.137 [2024-07-20 17:51:02.740763] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:28.137 [2024-07-20 17:51:02.740798] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:28.137 [2024-07-20 17:51:02.742337] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.137 passed 00:15:28.137 Test: admin_identify_ctrlr_verify_fused ...[2024-07-20 17:51:02.828970] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.137 [2024-07-20 17:51:02.831998] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.137 passed 00:15:28.137 Test: admin_identify_ns ...[2024-07-20 17:51:02.920631] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.393 [2024-07-20 17:51:02.978815] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:28.393 [2024-07-20 17:51:02.986827] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:28.393 [2024-07-20 17:51:03.007942] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.393 passed 00:15:28.393 Test: admin_get_features_mandatory_features ...[2024-07-20 17:51:03.095734] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.393 [2024-07-20 17:51:03.098758] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.393 passed 00:15:28.393 Test: admin_get_features_optional_features ...[2024-07-20 17:51:03.182383] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.393 [2024-07-20 17:51:03.186410] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.649 passed 00:15:28.649 Test: admin_set_features_number_of_queues ...[2024-07-20 17:51:03.272661] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.649 [2024-07-20 17:51:03.377939] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.649 passed 00:15:28.906 Test: admin_get_log_page_mandatory_logs ...[2024-07-20 17:51:03.461409] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.906 [2024-07-20 17:51:03.464438] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.906 passed 00:15:28.906 Test: admin_get_log_page_with_lpo ...[2024-07-20 17:51:03.550624] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.906 [2024-07-20 17:51:03.617807] 
ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:28.906 [2024-07-20 17:51:03.630882] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.906 passed 00:15:29.162 Test: fabric_property_get ...[2024-07-20 17:51:03.715997] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.162 [2024-07-20 17:51:03.717275] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:29.162 [2024-07-20 17:51:03.719020] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.162 passed 00:15:29.162 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-20 17:51:03.806627] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.162 [2024-07-20 17:51:03.807932] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:29.162 [2024-07-20 17:51:03.809644] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.162 passed 00:15:29.162 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-20 17:51:03.892166] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.419 [2024-07-20 17:51:03.970823] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:29.419 [2024-07-20 17:51:03.986822] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:29.419 [2024-07-20 17:51:03.991900] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.419 passed 00:15:29.419 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-20 17:51:04.077349] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.419 [2024-07-20 17:51:04.078658] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:29.419 [2024-07-20 17:51:04.080370] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.419 passed 00:15:29.419 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-20 17:51:04.164935] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.676 [2024-07-20 17:51:04.240834] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:29.676 [2024-07-20 17:51:04.264818] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:29.676 [2024-07-20 17:51:04.269903] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.676 passed 00:15:29.676 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-20 17:51:04.355671] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.676 [2024-07-20 17:51:04.356994] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:29.676 [2024-07-20 17:51:04.357033] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:29.676 [2024-07-20 17:51:04.358694] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.676 passed 00:15:29.676 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-20 17:51:04.445268] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.933 [2024-07-20 17:51:04.535801] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:15:29.933 [2024-07-20 17:51:04.543808] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:29.933 [2024-07-20 17:51:04.551805] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:29.933 [2024-07-20 17:51:04.556807] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:29.933 [2024-07-20 17:51:04.588936] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.933 passed 00:15:29.933 Test: admin_create_io_sq_verify_pc ...[2024-07-20 17:51:04.671143] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.933 [2024-07-20 17:51:04.687818] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:29.933 [2024-07-20 17:51:04.705528] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.189 passed 00:15:30.189 Test: admin_create_io_qp_max_qps ...[2024-07-20 17:51:04.792145] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.135 [2024-07-20 17:51:05.883823] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:31.698 [2024-07-20 17:51:06.260671] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.698 passed 00:15:31.698 Test: admin_create_io_sq_shared_cq ...[2024-07-20 17:51:06.349400] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.698 [2024-07-20 17:51:06.480802] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:31.954 [2024-07-20 17:51:06.517890] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.954 passed 00:15:31.954 00:15:31.954 Run Summary: Type Total Ran Passed Failed Inactive 00:15:31.954 suites 1 1 n/a 0 0 00:15:31.954 tests 18 18 18 0 0 00:15:31.954 asserts 360 360 360 0 n/a 00:15:31.954 00:15:31.954 Elapsed time = 1.566 seconds 00:15:31.954 17:51:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 913950 00:15:31.954 17:51:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 913950 ']' 00:15:31.954 17:51:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 913950 00:15:31.954 17:51:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:15:31.954 17:51:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:31.954 17:51:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 913950 00:15:31.954 17:51:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:31.954 17:51:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:31.954 17:51:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 913950' 00:15:31.954 killing process with pid 913950 00:15:31.954 17:51:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 913950 00:15:31.954 17:51:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 913950 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:32.213 00:15:32.213 real 0m5.708s 00:15:32.213 user 0m16.103s 00:15:32.213 sys 0m0.564s 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:32.213 ************************************ 00:15:32.213 END TEST nvmf_vfio_user_nvme_compliance 00:15:32.213 ************************************ 00:15:32.213 17:51:06 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:32.213 17:51:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:32.213 17:51:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:32.213 17:51:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:32.213 ************************************ 00:15:32.213 START TEST nvmf_vfio_user_fuzz 00:15:32.213 ************************************ 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:32.213 * Looking for test storage... 00:15:32.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.213 17:51:06 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=914706 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 914706' 00:15:32.213 Process pid: 914706 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 914706 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 914706 ']' 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:32.213 17:51:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.471 17:51:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:32.471 17:51:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:15:32.471 17:51:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:33.841 malloc0 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:33.841 17:51:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:05.951 Fuzzing completed. 
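The vfio-user fuzz pass above reduces to a short RPC sequence against a freshly started target followed by one nvme_fuzz invocation. A minimal standalone sketch of the same steps, assuming a built SPDK tree as the working directory and using scripts/rpc.py in place of the harness's rpc_cmd wrapper (the sleep stands in for its waitforlisten helper):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &                 # target app on core 0, all trace groups enabled
  sleep 2                                                      # crude wait for /var/tmp/spdk.sock to appear
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER           # vfio-user transport only, no TCP needed here
  mkdir -p /var/run/vfio-user
  ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0        # 64 MiB RAM bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

The -t 30 / -S 123456 pair (30-second run, fixed seed) matches the command line captured above; the opcode and command counts in the summary that follows come from that run.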
Shutting down the fuzz application 00:16:05.951 00:16:05.952 Dumping successful admin opcodes: 00:16:05.952 8, 9, 10, 24, 00:16:05.952 Dumping successful io opcodes: 00:16:05.952 0, 00:16:05.952 NS: 0x200003a1ef00 I/O qp, Total commands completed: 596394, total successful commands: 2303, random_seed: 1818495168 00:16:05.952 NS: 0x200003a1ef00 admin qp, Total commands completed: 81684, total successful commands: 653, random_seed: 696951488 00:16:05.952 17:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:05.952 17:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.952 17:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:05.952 17:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.952 17:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 914706 00:16:05.952 17:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 914706 ']' 00:16:05.952 17:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 914706 00:16:05.952 17:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:16:05.952 17:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:05.952 17:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 914706 00:16:05.952 17:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:05.952 17:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:05.952 17:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 914706' 00:16:05.952 killing process with pid 914706 00:16:05.952 17:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 914706 00:16:05.952 17:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 914706 00:16:05.952 17:51:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:05.952 17:51:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:05.952 00:16:05.952 real 0m32.195s 00:16:05.952 user 0m31.522s 00:16:05.952 sys 0m29.764s 00:16:05.952 17:51:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:05.952 17:51:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:05.952 ************************************ 00:16:05.952 END TEST nvmf_vfio_user_fuzz 00:16:05.952 ************************************ 00:16:05.952 17:51:39 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:05.952 17:51:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:05.952 17:51:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:05.952 17:51:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:05.952 ************************************ 00:16:05.952 START TEST nvmf_host_management 00:16:05.952 ************************************ 
00:16:05.952 17:51:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:05.952 * Looking for test storage... 00:16:05.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:05.952 17:51:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:05.953 17:51:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:05.954 17:51:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:06.211 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:06.211 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:06.211 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:06.211 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:06.211 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:06.211 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:06.211 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:06.211 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:06.211 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:06.212 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:06.212 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:06.212 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:06.212 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:06.212 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:06.212 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:06.212 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.212 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.212 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.212 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.212 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.212 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.212 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.212 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.470 17:51:41 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:06.470 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:06.470 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:06.470 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:06.470 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.470 17:51:41 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:06.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:16:06.470 00:16:06.470 --- 10.0.0.2 ping statistics --- 00:16:06.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.470 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:06.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:16:06.470 00:16:06.470 --- 10.0.0.1 ping statistics --- 00:16:06.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.470 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=920647 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 920647 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 920647 ']' 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
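The nvmftestinit sequence above splits the two cvl_0_* ports of the E810 NIC between the root namespace and a private cvl_0_0_ns_spdk namespace, so the target side (10.0.0.2, inside the namespace) and the initiator side (10.0.0.1, in the root namespace) exchange NVMe/TCP traffic over the physical link. Condensed into the underlying iproute2/iptables calls, with the interface names taken from this rig (substitute your own port pair):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP back into the root namespace
  ping -c 1 10.0.0.2                                            # root namespace -> target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target namespace -> initiator address
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &   # target runs inside the namespace

The two successful one-packet pings above are the sanity check that both directions work before the target application is started.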
00:16:06.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:06.470 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:06.470 [2024-07-20 17:51:41.221303] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:06.470 [2024-07-20 17:51:41.221384] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.470 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.728 [2024-07-20 17:51:41.292116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:06.728 [2024-07-20 17:51:41.390206] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.728 [2024-07-20 17:51:41.390270] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.728 [2024-07-20 17:51:41.390287] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.728 [2024-07-20 17:51:41.390301] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.728 [2024-07-20 17:51:41.390313] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:06.728 [2024-07-20 17:51:41.390416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.728 [2024-07-20 17:51:41.390440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:06.728 [2024-07-20 17:51:41.390514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:06.728 [2024-07-20 17:51:41.390517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.728 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:06.728 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:06.728 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:06.728 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:06.728 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:06.986 [2024-07-20 17:51:41.536555] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:06.986 Malloc0 00:16:06.986 [2024-07-20 17:51:41.601694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=920694 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 920694 /var/tmp/bdevperf.sock 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 920694 ']' 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:06.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:06.986 { 00:16:06.986 "params": { 00:16:06.986 "name": "Nvme$subsystem", 00:16:06.986 "trtype": "$TEST_TRANSPORT", 00:16:06.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:06.986 "adrfam": "ipv4", 00:16:06.986 "trsvcid": "$NVMF_PORT", 00:16:06.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:06.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:06.986 "hdgst": ${hdgst:-false}, 00:16:06.986 "ddgst": ${ddgst:-false} 00:16:06.986 }, 00:16:06.986 "method": "bdev_nvme_attach_controller" 00:16:06.986 } 00:16:06.986 EOF 00:16:06.986 )") 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:06.986 17:51:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:06.986 "params": { 00:16:06.986 "name": "Nvme0", 00:16:06.986 "trtype": "tcp", 00:16:06.986 "traddr": "10.0.0.2", 00:16:06.986 "adrfam": "ipv4", 00:16:06.986 "trsvcid": "4420", 00:16:06.986 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:06.986 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:06.986 "hdgst": false, 00:16:06.986 "ddgst": false 00:16:06.986 }, 00:16:06.986 "method": "bdev_nvme_attach_controller" 00:16:06.986 }' 00:16:06.986 [2024-07-20 17:51:41.681850] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:06.986 [2024-07-20 17:51:41.681927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid920694 ] 00:16:06.986 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.986 [2024-07-20 17:51:41.743946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.244 [2024-07-20 17:51:41.831556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.244 Running I/O for 10 seconds... 
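The bdevperf child above receives its bdev configuration as JSON on /dev/fd/63; the fragment printed just before it is the single bdev_nvme_attach_controller call that creates Nvme0n1 over NVMe/TCP. A rough hand-rolled equivalent, assuming the standard SPDK "subsystems"/"bdev" JSON config wrapper that the harness's gen_nvmf_target_json helper produces around that fragment: write the following to bdevperf_nvme.json

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }

and run

  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json bdevperf_nvme.json \
      -q 64 -o 65536 -w verify -t 10        # queue depth 64, 64 KiB verify I/O for 10 seconds

The test then polls bdev_get_iostat over /var/tmp/bdevperf.sock until num_read_ops crosses 100, which is how it confirms I/O is actually flowing before it removes the host from the subsystem while the workload is still running.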
00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=3 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']' 00:16:07.501 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:07.759 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:07.759 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:07.759 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:07.759 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:07.759 17:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.759 17:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:07.759 17:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.759 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=259 00:16:07.759 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 259 -ge 100 ']' 00:16:07.759 17:51:42 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:07.759 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:07.759 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:07.759 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:07.759 17:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.759 17:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:07.759 [2024-07-20 17:51:42.388201] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88b980 is same with the state(5) to be set 00:16:07.759 [2024-07-20 17:51:42.388433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.759 [2024-07-20 17:51:42.388476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.759 [2024-07-20 17:51:42.388506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.759 [2024-07-20 17:51:42.388521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.759 [2024-07-20 17:51:42.388535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.759 [2024-07-20 17:51:42.388548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.759 [2024-07-20 17:51:42.388562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.759 [2024-07-20 17:51:42.388576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.759 [2024-07-20 17:51:42.388589] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbf00 is same with the state(5) to be set 00:16:07.759 [2024-07-20 17:51:42.389126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.759 [2024-07-20 17:51:42.389151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.759 [2024-07-20 17:51:42.389185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.759 [2024-07-20 17:51:42.389201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.759 [2024-07-20 17:51:42.389218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.759 [2024-07-20 17:51:42.389233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.759 [2024-07-20 17:51:42.389250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44032 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:07.759 [2024-07-20 17:51:42.389264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.759 [2024-07-20 17:51:42.389279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.759 [2024-07-20 17:51:42.389293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.759 [2024-07-20 17:51:42.389309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.759 [2024-07-20 17:51:42.389324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.759 [2024-07-20 17:51:42.389339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.759 [2024-07-20 17:51:42.389354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.759 [2024-07-20 17:51:42.389369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.759 [2024-07-20 17:51:42.389384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.759 [2024-07-20 17:51:42.389400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.759 [2024-07-20 17:51:42.389415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.759 [2024-07-20 17:51:42.389436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.759 [2024-07-20 17:51:42.389451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.759 [2024-07-20 17:51:42.389467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.759 [2024-07-20 17:51:42.389482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.759 [2024-07-20 17:51:42.389498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.759 [2024-07-20 17:51:42.389512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.759 [2024-07-20 17:51:42.389528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.759 [2024-07-20 17:51:42.389543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.759 [2024-07-20 17:51:42.389559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.389573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.389589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.389604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.389620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.389635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.389650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.389664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.389680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.389695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.389711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.389725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.389741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.389756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.389771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.389786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.389811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.389832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.389849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.389864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.389880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:07.760 [2024-07-20 17:51:42.389895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.389910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.389925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.389940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.389955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.389971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.389986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 
[2024-07-20 17:51:42.390206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 
17:51:42.390520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390856] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.760 [2024-07-20 17:51:42.390916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.760 [2024-07-20 17:51:42.390931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.761 [2024-07-20 17:51:42.390946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.761 [2024-07-20 17:51:42.390961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.761 [2024-07-20 17:51:42.390976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.761 [2024-07-20 17:51:42.390991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.761 [2024-07-20 17:51:42.391005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.761 [2024-07-20 17:51:42.391025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.761 [2024-07-20 17:51:42.391041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.761 [2024-07-20 17:51:42.391056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.761 [2024-07-20 17:51:42.391070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.761 [2024-07-20 17:51:42.391086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.761 [2024-07-20 17:51:42.391120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.761 [2024-07-20 17:51:42.391135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.761 [2024-07-20 17:51:42.391149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.761 [2024-07-20 17:51:42.391163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.761 [2024-07-20 17:51:42.391177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.761 [2024-07-20 17:51:42.391262] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1df6330 was disconnected and freed. reset controller. 00:16:07.761 [2024-07-20 17:51:42.392405] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:07.761 17:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.761 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:07.761 17:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.761 17:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:07.761 task offset: 43648 on job bdev=Nvme0n1 fails 00:16:07.761 00:16:07.761 Latency(us) 00:16:07.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.761 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:07.761 Job: Nvme0n1 ended in about 0.38 seconds with error 00:16:07.761 Verification LBA range: start 0x0 length 0x400 00:16:07.761 Nvme0n1 : 0.38 837.71 52.36 167.54 0.00 61957.56 2706.39 54758.97 00:16:07.761 =================================================================================================================== 00:16:07.761 Total : 837.71 52.36 167.54 0.00 61957.56 2706.39 54758.97 00:16:07.761 [2024-07-20 17:51:42.394277] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:07.761 [2024-07-20 17:51:42.394305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dfbf00 (9): Bad file descriptor 00:16:07.761 17:51:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.761 17:51:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:07.761 [2024-07-20 17:51:42.449199] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
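What this stretch of the trace captures is the host-management scenario itself: bdevperf is driving I/O as host0 when the target's host allow-list changes, so the TCP qpair is torn down (hence the SQ-deletion aborts and the Nvme0n1 job that ended after about 0.38 seconds with an error), the initiator schedules a controller reset, and the test re-grants access with nvmf_subsystem_add_host so that reset can reconnect, which is the "Resetting controller successful" line. A minimal sketch of that allow-list round trip against a running target, using the RPC visible in this log plus nvmf_subsystem_remove_host as its assumed counterpart (check the RPC set of your SPDK build), would be:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Revoking host0 drops its connection; queued I/O completes as ABORTED - SQ DELETION
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-granting access lets the initiator's controller reset reconnect and resume I/O
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0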
00:16:08.691 17:51:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 920694 00:16:08.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (920694) - No such process 00:16:08.691 17:51:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:08.691 17:51:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:08.691 17:51:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:08.691 17:51:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:08.691 17:51:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:08.691 17:51:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:08.691 17:51:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:08.691 17:51:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:08.691 { 00:16:08.691 "params": { 00:16:08.691 "name": "Nvme$subsystem", 00:16:08.691 "trtype": "$TEST_TRANSPORT", 00:16:08.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:08.691 "adrfam": "ipv4", 00:16:08.691 "trsvcid": "$NVMF_PORT", 00:16:08.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:08.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:08.691 "hdgst": ${hdgst:-false}, 00:16:08.691 "ddgst": ${ddgst:-false} 00:16:08.691 }, 00:16:08.691 "method": "bdev_nvme_attach_controller" 00:16:08.691 } 00:16:08.691 EOF 00:16:08.691 )") 00:16:08.691 17:51:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:08.691 17:51:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:08.691 17:51:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:08.691 17:51:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:08.691 "params": { 00:16:08.691 "name": "Nvme0", 00:16:08.691 "trtype": "tcp", 00:16:08.691 "traddr": "10.0.0.2", 00:16:08.691 "adrfam": "ipv4", 00:16:08.691 "trsvcid": "4420", 00:16:08.691 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:08.691 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:08.691 "hdgst": false, 00:16:08.691 "ddgst": false 00:16:08.691 }, 00:16:08.691 "method": "bdev_nvme_attach_controller" 00:16:08.691 }' 00:16:08.691 [2024-07-20 17:51:43.445210] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:08.691 [2024-07-20 17:51:43.445305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid920975 ] 00:16:08.691 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.949 [2024-07-20 17:51:43.505942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.949 [2024-07-20 17:51:43.591068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.206 Running I/O for 1 seconds... 
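The gen_nvmf_target_json helper above renders the bdev_nvme_attach_controller parameters and hands them to bdevperf over process substitution (the --json /dev/fd/62 argument). A standalone equivalent, assuming the usual SPDK JSON-config wrapper around the method/params object printed in the trace (the /tmp/bdevperf.json path is just an example), looks roughly like this:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same queue depth, I/O size, workload and runtime as the run above
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1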
00:16:10.576 00:16:10.576 Latency(us) 00:16:10.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.576 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:10.576 Verification LBA range: start 0x0 length 0x400 00:16:10.576 Nvme0n1 : 1.13 679.94 42.50 0.00 0.00 89790.39 21068.61 69905.07 00:16:10.576 =================================================================================================================== 00:16:10.576 Total : 679.94 42.50 0.00 0.00 89790.39 21068.61 69905.07 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:10.576 rmmod nvme_tcp 00:16:10.576 rmmod nvme_fabrics 00:16:10.576 rmmod nvme_keyring 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 920647 ']' 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 920647 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 920647 ']' 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 920647 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 920647 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 920647' 00:16:10.576 killing process with pid 920647 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 920647 00:16:10.576 17:51:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 920647 00:16:10.834 [2024-07-20 17:51:45.429016] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:10.834 17:51:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:10.834 17:51:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:10.834 17:51:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:10.834 17:51:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:10.834 17:51:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:10.834 17:51:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.834 17:51:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.834 17:51:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.734 17:51:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:12.734 17:51:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:12.734 00:16:12.734 real 0m8.388s 00:16:12.734 user 0m19.127s 00:16:12.734 sys 0m2.508s 00:16:12.734 17:51:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:12.734 17:51:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:12.734 ************************************ 00:16:12.734 END TEST nvmf_host_management 00:16:12.734 ************************************ 00:16:12.734 17:51:47 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:12.734 17:51:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:12.734 17:51:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:12.734 17:51:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:12.993 ************************************ 00:16:12.993 START TEST nvmf_lvol 00:16:12.993 ************************************ 00:16:12.993 17:51:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:12.993 * Looking for test storage... 
00:16:12.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:12.993 17:51:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:12.993 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:12.993 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.993 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.993 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.993 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.993 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.993 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.993 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.993 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.994 17:51:47 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:12.994 17:51:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:14.894 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:14.894 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.894 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:14.895 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:14.895 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:14.895 
17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:14.895 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:15.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:16:15.153 00:16:15.153 --- 10.0.0.2 ping statistics --- 00:16:15.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.153 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:15.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:15.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:16:15.153 00:16:15.153 --- 10.0.0.1 ping statistics --- 00:16:15.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.153 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=923051 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 923051 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 923051 ']' 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:15.153 17:51:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:15.153 [2024-07-20 17:51:49.794244] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:15.153 [2024-07-20 17:51:49.794318] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.153 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.153 [2024-07-20 17:51:49.862577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:15.411 [2024-07-20 17:51:49.956338] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.411 [2024-07-20 17:51:49.956411] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:15.411 [2024-07-20 17:51:49.956432] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.411 [2024-07-20 17:51:49.956443] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.411 [2024-07-20 17:51:49.956452] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.411 [2024-07-20 17:51:49.956534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.411 [2024-07-20 17:51:49.956565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.411 [2024-07-20 17:51:49.956567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.411 17:51:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:15.411 17:51:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:15.411 17:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:15.411 17:51:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:15.411 17:51:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:15.411 17:51:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.411 17:51:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:15.668 [2024-07-20 17:51:50.311635] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.668 17:51:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:15.925 17:51:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:15.925 17:51:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:16.214 17:51:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:16.214 17:51:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:16.505 17:51:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:16.762 17:51:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=578de8ff-27e3-4513-a1e2-32d62b08e5e9 00:16:16.762 17:51:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 578de8ff-27e3-4513-a1e2-32d62b08e5e9 lvol 20 00:16:17.019 17:51:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=17634380-1994-487c-880d-d8b6135e2ed1 00:16:17.019 17:51:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:17.276 17:51:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 17634380-1994-487c-880d-d8b6135e2ed1 00:16:17.532 17:51:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
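Condensed, the provisioning chain traced above is: two 64 MB malloc bdevs (512-byte blocks) striped into a raid0, an lvstore created on the raid, an lvol of size 20 (LVOL_BDEV_INIT_SIZE) carved out of it, and that lvol exported as a namespace of nqn.2016-06.io.spdk:cnode0 over TCP. A compact replay of the same RPC sequence, with the UUIDs the calls return captured into shell variables rather than hard-coded, might look like:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                 # -> Malloc0
$rpc bdev_malloc_create 64 512                 # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # lvol bdev name/UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420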
00:16:17.789 [2024-07-20 17:51:52.355514] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.789 17:51:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:18.047 17:51:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=923475 00:16:18.047 17:51:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:18.047 17:51:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:18.047 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.982 17:51:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 17634380-1994-487c-880d-d8b6135e2ed1 MY_SNAPSHOT 00:16:19.240 17:51:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a09710c6-01ea-487f-b517-040caf3e9d11 00:16:19.240 17:51:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 17634380-1994-487c-880d-d8b6135e2ed1 30 00:16:19.498 17:51:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a09710c6-01ea-487f-b517-040caf3e9d11 MY_CLONE 00:16:19.755 17:51:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=871f0fd2-f2af-40d4-a71b-fd3371d6f978 00:16:19.755 17:51:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 871f0fd2-f2af-40d4-a71b-fd3371d6f978 00:16:20.013 17:51:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 923475 00:16:29.975 Initializing NVMe Controllers 00:16:29.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:29.975 Controller IO queue size 128, less than required. 00:16:29.975 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:29.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:29.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:29.975 Initialization complete. Launching workers. 
00:16:29.975 ======================================================== 00:16:29.975 Latency(us) 00:16:29.975 Device Information : IOPS MiB/s Average min max 00:16:29.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10726.81 41.90 11938.11 509.15 71553.35 00:16:29.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10156.16 39.67 12608.63 1583.18 65373.97 00:16:29.975 ======================================================== 00:16:29.975 Total : 20882.97 81.57 12264.21 509.15 71553.35 00:16:29.975 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 17634380-1994-487c-880d-d8b6135e2ed1 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 578de8ff-27e3-4513-a1e2-32d62b08e5e9 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:29.975 rmmod nvme_tcp 00:16:29.975 rmmod nvme_fabrics 00:16:29.975 rmmod nvme_keyring 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 923051 ']' 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 923051 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 923051 ']' 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 923051 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 923051 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 923051' 00:16:29.975 killing process with pid 923051 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 923051 00:16:29.975 17:52:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 923051 00:16:29.975 17:52:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:29.975 17:52:04 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:29.975 17:52:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:29.975 17:52:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.975 17:52:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:29.975 17:52:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.975 17:52:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.975 17:52:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:31.872 00:16:31.872 real 0m18.684s 00:16:31.872 user 0m59.693s 00:16:31.872 sys 0m7.219s 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:31.872 ************************************ 00:16:31.872 END TEST nvmf_lvol 00:16:31.872 ************************************ 00:16:31.872 17:52:06 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:31.872 17:52:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:31.872 17:52:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:31.872 17:52:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:31.872 ************************************ 00:16:31.872 START TEST nvmf_lvs_grow 00:16:31.872 ************************************ 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:31.872 * Looking for test storage... 
00:16:31.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.872 17:52:06 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:31.873 17:52:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:33.770 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:33.770 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:33.770 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:33.771 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:33.771 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:33.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:16:33.771 00:16:33.771 --- 10.0.0.2 ping statistics --- 00:16:33.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.771 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:33.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:33.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:16:33.771 00:16:33.771 --- 10.0.0.1 ping statistics --- 00:16:33.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.771 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=926726 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 926726 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 926726 ']' 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:33.771 17:52:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:33.771 [2024-07-20 17:52:08.485383] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:33.771 [2024-07-20 17:52:08.485470] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.771 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.771 [2024-07-20 17:52:08.549478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.029 [2024-07-20 17:52:08.635942] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.029 [2024-07-20 17:52:08.636004] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:34.029 [2024-07-20 17:52:08.636017] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.029 [2024-07-20 17:52:08.636029] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.029 [2024-07-20 17:52:08.636040] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.029 [2024-07-20 17:52:08.636081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.029 17:52:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:34.029 17:52:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:16:34.029 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:34.029 17:52:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:34.029 17:52:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:34.029 17:52:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.029 17:52:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:34.287 [2024-07-20 17:52:09.045292] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.287 17:52:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:34.287 17:52:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:34.287 17:52:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:34.287 17:52:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:34.546 ************************************ 00:16:34.546 START TEST lvs_grow_clean 00:16:34.546 ************************************ 00:16:34.546 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:16:34.546 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:34.546 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:34.546 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:34.546 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:34.546 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:34.546 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:34.546 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:34.546 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:34.546 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:34.804 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:16:34.804 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:35.063 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b6005973-e83d-4fad-bc6a-fa810cd9a4c4 00:16:35.063 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6005973-e83d-4fad-bc6a-fa810cd9a4c4 00:16:35.063 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:35.321 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:35.321 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:35.321 17:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b6005973-e83d-4fad-bc6a-fa810cd9a4c4 lvol 150 00:16:35.579 17:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a78d2a58-f63b-450c-9633-554c416ed65e 00:16:35.579 17:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:35.579 17:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:35.836 [2024-07-20 17:52:10.442256] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:35.836 [2024-07-20 17:52:10.442335] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:35.836 true 00:16:35.836 17:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6005973-e83d-4fad-bc6a-fa810cd9a4c4 00:16:35.836 17:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:36.098 17:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:36.098 17:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:36.355 17:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a78d2a58-f63b-450c-9633-554c416ed65e 00:16:36.613 17:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:36.871 [2024-07-20 17:52:11.513489] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.871 17:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:37.128 17:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=927168 00:16:37.128 17:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:37.128 17:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:37.128 17:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 927168 /var/tmp/bdevperf.sock 00:16:37.128 17:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 927168 ']' 00:16:37.128 17:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:37.128 17:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:37.128 17:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:37.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:37.128 17:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:37.128 17:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:37.128 [2024-07-20 17:52:11.838157] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:16:37.128 [2024-07-20 17:52:11.838225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid927168 ] 00:16:37.128 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.128 [2024-07-20 17:52:11.898945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.386 [2024-07-20 17:52:11.989895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.386 17:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:37.386 17:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:16:37.386 17:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:37.949 Nvme0n1 00:16:37.949 17:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:38.206 [ 00:16:38.206 { 00:16:38.206 "name": "Nvme0n1", 00:16:38.206 "aliases": [ 00:16:38.206 "a78d2a58-f63b-450c-9633-554c416ed65e" 00:16:38.206 ], 00:16:38.206 "product_name": "NVMe disk", 00:16:38.206 "block_size": 4096, 00:16:38.206 "num_blocks": 38912, 00:16:38.206 "uuid": "a78d2a58-f63b-450c-9633-554c416ed65e", 00:16:38.206 "assigned_rate_limits": { 00:16:38.206 "rw_ios_per_sec": 0, 00:16:38.206 "rw_mbytes_per_sec": 0, 00:16:38.206 "r_mbytes_per_sec": 0, 00:16:38.206 "w_mbytes_per_sec": 0 00:16:38.206 }, 00:16:38.206 "claimed": false, 00:16:38.206 "zoned": false, 00:16:38.206 "supported_io_types": { 00:16:38.206 "read": true, 00:16:38.206 "write": true, 00:16:38.206 "unmap": true, 00:16:38.206 "write_zeroes": true, 00:16:38.206 "flush": true, 00:16:38.206 "reset": true, 00:16:38.206 "compare": true, 00:16:38.206 "compare_and_write": true, 00:16:38.206 "abort": true, 00:16:38.206 "nvme_admin": true, 00:16:38.206 "nvme_io": true 00:16:38.206 }, 00:16:38.206 "memory_domains": [ 00:16:38.206 { 00:16:38.206 "dma_device_id": "system", 00:16:38.206 "dma_device_type": 1 00:16:38.206 } 00:16:38.206 ], 00:16:38.206 "driver_specific": { 00:16:38.206 "nvme": [ 00:16:38.206 { 00:16:38.206 "trid": { 00:16:38.206 "trtype": "TCP", 00:16:38.206 "adrfam": "IPv4", 00:16:38.206 "traddr": "10.0.0.2", 00:16:38.206 "trsvcid": "4420", 00:16:38.206 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:38.206 }, 00:16:38.206 "ctrlr_data": { 00:16:38.206 "cntlid": 1, 00:16:38.206 "vendor_id": "0x8086", 00:16:38.206 "model_number": "SPDK bdev Controller", 00:16:38.206 "serial_number": "SPDK0", 00:16:38.206 "firmware_revision": "24.05.1", 00:16:38.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:38.206 "oacs": { 00:16:38.206 "security": 0, 00:16:38.206 "format": 0, 00:16:38.206 "firmware": 0, 00:16:38.206 "ns_manage": 0 00:16:38.206 }, 00:16:38.206 "multi_ctrlr": true, 00:16:38.206 "ana_reporting": false 00:16:38.206 }, 00:16:38.206 "vs": { 00:16:38.206 "nvme_version": "1.3" 00:16:38.206 }, 00:16:38.206 "ns_data": { 00:16:38.206 "id": 1, 00:16:38.207 "can_share": true 00:16:38.207 } 00:16:38.207 } 00:16:38.207 ], 00:16:38.207 "mp_policy": "active_passive" 00:16:38.207 } 00:16:38.207 } 00:16:38.207 ] 00:16:38.207 17:52:12 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=927302 00:16:38.207 17:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:38.207 17:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:38.207 Running I/O for 10 seconds... 00:16:39.137 Latency(us) 00:16:39.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.137 Nvme0n1 : 1.00 14140.00 55.23 0.00 0.00 0.00 0.00 0.00 00:16:39.137 =================================================================================================================== 00:16:39.137 Total : 14140.00 55.23 0.00 0.00 0.00 0.00 0.00 00:16:39.137 00:16:40.070 17:52:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b6005973-e83d-4fad-bc6a-fa810cd9a4c4 00:16:40.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.328 Nvme0n1 : 2.00 14468.50 56.52 0.00 0.00 0.00 0.00 0.00 00:16:40.328 =================================================================================================================== 00:16:40.328 Total : 14468.50 56.52 0.00 0.00 0.00 0.00 0.00 00:16:40.328 00:16:40.328 true 00:16:40.328 17:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6005973-e83d-4fad-bc6a-fa810cd9a4c4 00:16:40.328 17:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:40.586 17:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:40.587 17:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:40.587 17:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 927302 00:16:41.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:41.153 Nvme0n1 : 3.00 14462.67 56.49 0.00 0.00 0.00 0.00 0.00 00:16:41.153 =================================================================================================================== 00:16:41.153 Total : 14462.67 56.49 0.00 0.00 0.00 0.00 0.00 00:16:41.153 00:16:42.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:42.526 Nvme0n1 : 4.00 14466.25 56.51 0.00 0.00 0.00 0.00 0.00 00:16:42.526 =================================================================================================================== 00:16:42.526 Total : 14466.25 56.51 0.00 0.00 0.00 0.00 0.00 00:16:42.526 00:16:43.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:43.458 Nvme0n1 : 5.00 14565.60 56.90 0.00 0.00 0.00 0.00 0.00 00:16:43.458 =================================================================================================================== 00:16:43.458 Total : 14565.60 56.90 0.00 0.00 0.00 0.00 0.00 00:16:43.458 00:16:44.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.389 Nvme0n1 : 6.00 14591.33 57.00 0.00 0.00 0.00 0.00 0.00 00:16:44.389 
=================================================================================================================== 00:16:44.389 Total : 14591.33 57.00 0.00 0.00 0.00 0.00 0.00 00:16:44.389 00:16:45.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:45.322 Nvme0n1 : 7.00 14675.57 57.33 0.00 0.00 0.00 0.00 0.00 00:16:45.322 =================================================================================================================== 00:16:45.322 Total : 14675.57 57.33 0.00 0.00 0.00 0.00 0.00 00:16:45.322 00:16:46.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:46.255 Nvme0n1 : 8.00 14671.50 57.31 0.00 0.00 0.00 0.00 0.00 00:16:46.255 =================================================================================================================== 00:16:46.255 Total : 14671.50 57.31 0.00 0.00 0.00 0.00 0.00 00:16:46.255 00:16:47.188 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:47.188 Nvme0n1 : 9.00 14676.89 57.33 0.00 0.00 0.00 0.00 0.00 00:16:47.188 =================================================================================================================== 00:16:47.188 Total : 14676.89 57.33 0.00 0.00 0.00 0.00 0.00 00:16:47.188 00:16:48.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.561 Nvme0n1 : 10.00 14733.70 57.55 0.00 0.00 0.00 0.00 0.00 00:16:48.561 =================================================================================================================== 00:16:48.561 Total : 14733.70 57.55 0.00 0.00 0.00 0.00 0.00 00:16:48.561 00:16:48.561 00:16:48.561 Latency(us) 00:16:48.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.562 Nvme0n1 : 10.01 14729.54 57.54 0.00 0.00 8683.66 5509.88 15534.46 00:16:48.562 =================================================================================================================== 00:16:48.562 Total : 14729.54 57.54 0.00 0.00 8683.66 5509.88 15534.46 00:16:48.562 0 00:16:48.562 17:52:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 927168 00:16:48.562 17:52:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 927168 ']' 00:16:48.562 17:52:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 927168 00:16:48.562 17:52:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:16:48.562 17:52:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:48.562 17:52:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 927168 00:16:48.562 17:52:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:48.562 17:52:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:48.562 17:52:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 927168' 00:16:48.562 killing process with pid 927168 00:16:48.562 17:52:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 927168 00:16:48.562 Received shutdown signal, test time was about 10.000000 seconds 00:16:48.562 00:16:48.562 Latency(us) 00:16:48.562 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:16:48.562 =================================================================================================================== 00:16:48.562 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:48.562 17:52:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 927168 00:16:48.562 17:52:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:48.819 17:52:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:49.101 17:52:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6005973-e83d-4fad-bc6a-fa810cd9a4c4 00:16:49.101 17:52:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:49.366 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:49.366 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:49.366 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:49.623 [2024-07-20 17:52:24.231589] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:49.623 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6005973-e83d-4fad-bc6a-fa810cd9a4c4 00:16:49.623 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:16:49.623 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6005973-e83d-4fad-bc6a-fa810cd9a4c4 00:16:49.623 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.623 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.623 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.623 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.623 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.623 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.623 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.623 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:49.623 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6005973-e83d-4fad-bc6a-fa810cd9a4c4 00:16:49.881 request: 00:16:49.881 { 00:16:49.881 "uuid": "b6005973-e83d-4fad-bc6a-fa810cd9a4c4", 00:16:49.881 "method": "bdev_lvol_get_lvstores", 00:16:49.881 "req_id": 1 00:16:49.881 } 00:16:49.881 Got JSON-RPC error response 00:16:49.881 response: 00:16:49.881 { 00:16:49.881 "code": -19, 00:16:49.881 "message": "No such device" 00:16:49.881 } 00:16:49.881 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:16:49.881 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:49.881 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:49.881 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:49.881 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:50.138 aio_bdev 00:16:50.138 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a78d2a58-f63b-450c-9633-554c416ed65e 00:16:50.139 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=a78d2a58-f63b-450c-9633-554c416ed65e 00:16:50.139 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:50.139 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:16:50.139 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:50.139 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:50.139 17:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:50.396 17:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a78d2a58-f63b-450c-9633-554c416ed65e -t 2000 00:16:50.654 [ 00:16:50.654 { 00:16:50.654 "name": "a78d2a58-f63b-450c-9633-554c416ed65e", 00:16:50.654 "aliases": [ 00:16:50.654 "lvs/lvol" 00:16:50.654 ], 00:16:50.654 "product_name": "Logical Volume", 00:16:50.654 "block_size": 4096, 00:16:50.654 "num_blocks": 38912, 00:16:50.654 "uuid": "a78d2a58-f63b-450c-9633-554c416ed65e", 00:16:50.654 "assigned_rate_limits": { 00:16:50.654 "rw_ios_per_sec": 0, 00:16:50.654 "rw_mbytes_per_sec": 0, 00:16:50.654 "r_mbytes_per_sec": 0, 00:16:50.654 "w_mbytes_per_sec": 0 00:16:50.654 }, 00:16:50.654 "claimed": false, 00:16:50.654 "zoned": false, 00:16:50.654 "supported_io_types": { 00:16:50.654 "read": true, 00:16:50.654 "write": true, 00:16:50.654 "unmap": true, 00:16:50.654 "write_zeroes": true, 00:16:50.654 "flush": false, 00:16:50.654 "reset": true, 00:16:50.654 "compare": false, 00:16:50.654 "compare_and_write": false, 00:16:50.654 "abort": false, 00:16:50.654 "nvme_admin": false, 00:16:50.654 "nvme_io": false 00:16:50.654 }, 00:16:50.654 "driver_specific": { 00:16:50.654 "lvol": { 00:16:50.654 "lvol_store_uuid": "b6005973-e83d-4fad-bc6a-fa810cd9a4c4", 00:16:50.654 "base_bdev": "aio_bdev", 
00:16:50.654 "thin_provision": false, 00:16:50.654 "num_allocated_clusters": 38, 00:16:50.654 "snapshot": false, 00:16:50.654 "clone": false, 00:16:50.654 "esnap_clone": false 00:16:50.654 } 00:16:50.654 } 00:16:50.654 } 00:16:50.654 ] 00:16:50.654 17:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:16:50.654 17:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6005973-e83d-4fad-bc6a-fa810cd9a4c4 00:16:50.654 17:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:50.912 17:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:50.912 17:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6005973-e83d-4fad-bc6a-fa810cd9a4c4 00:16:50.912 17:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:51.170 17:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:51.170 17:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a78d2a58-f63b-450c-9633-554c416ed65e 00:16:51.428 17:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b6005973-e83d-4fad-bc6a-fa810cd9a4c4 00:16:51.686 17:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:51.944 17:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:51.944 00:16:51.944 real 0m17.525s 00:16:51.944 user 0m16.993s 00:16:51.944 sys 0m1.910s 00:16:51.944 17:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:51.944 17:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:51.944 ************************************ 00:16:51.944 END TEST lvs_grow_clean 00:16:51.944 ************************************ 00:16:51.944 17:52:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:51.944 17:52:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:51.944 17:52:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:51.944 17:52:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:51.944 ************************************ 00:16:51.944 START TEST lvs_grow_dirty 00:16:51.944 ************************************ 00:16:51.944 17:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:16:51.944 17:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:51.944 17:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:51.944 17:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:16:51.944 17:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:51.944 17:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:51.944 17:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:51.944 17:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:51.944 17:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:51.944 17:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:52.201 17:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:52.201 17:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:52.458 17:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1883d2ad-b7c4-45a6-9869-53e9f6c8df46 00:16:52.458 17:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1883d2ad-b7c4-45a6-9869-53e9f6c8df46 00:16:52.458 17:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:52.715 17:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:52.715 17:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:52.715 17:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1883d2ad-b7c4-45a6-9869-53e9f6c8df46 lvol 150 00:16:52.972 17:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=325927e9-c64e-4d2b-85b5-416db0b5fa1a 00:16:52.972 17:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:52.972 17:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:53.230 [2024-07-20 17:52:27.969125] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:53.230 [2024-07-20 17:52:27.969206] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:53.230 true 00:16:53.230 17:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1883d2ad-b7c4-45a6-9869-53e9f6c8df46 00:16:53.230 17:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:16:53.487 17:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:53.487 17:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:53.744 17:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 325927e9-c64e-4d2b-85b5-416db0b5fa1a 00:16:54.002 17:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:54.259 [2024-07-20 17:52:29.012272] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.259 17:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:54.517 17:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=929214 00:16:54.517 17:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:54.517 17:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:54.517 17:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 929214 /var/tmp/bdevperf.sock 00:16:54.517 17:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 929214 ']' 00:16:54.517 17:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:54.517 17:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:54.517 17:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:54.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:54.517 17:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:54.517 17:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:54.775 [2024-07-20 17:52:29.317760] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:16:54.775 [2024-07-20 17:52:29.317870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929214 ] 00:16:54.775 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.775 [2024-07-20 17:52:29.383376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.775 [2024-07-20 17:52:29.473922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.032 17:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:55.032 17:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:16:55.032 17:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:55.290 Nvme0n1 00:16:55.290 17:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:55.548 [ 00:16:55.548 { 00:16:55.548 "name": "Nvme0n1", 00:16:55.548 "aliases": [ 00:16:55.548 "325927e9-c64e-4d2b-85b5-416db0b5fa1a" 00:16:55.548 ], 00:16:55.548 "product_name": "NVMe disk", 00:16:55.548 "block_size": 4096, 00:16:55.548 "num_blocks": 38912, 00:16:55.548 "uuid": "325927e9-c64e-4d2b-85b5-416db0b5fa1a", 00:16:55.548 "assigned_rate_limits": { 00:16:55.548 "rw_ios_per_sec": 0, 00:16:55.548 "rw_mbytes_per_sec": 0, 00:16:55.548 "r_mbytes_per_sec": 0, 00:16:55.548 "w_mbytes_per_sec": 0 00:16:55.548 }, 00:16:55.548 "claimed": false, 00:16:55.548 "zoned": false, 00:16:55.548 "supported_io_types": { 00:16:55.548 "read": true, 00:16:55.548 "write": true, 00:16:55.548 "unmap": true, 00:16:55.548 "write_zeroes": true, 00:16:55.548 "flush": true, 00:16:55.548 "reset": true, 00:16:55.548 "compare": true, 00:16:55.548 "compare_and_write": true, 00:16:55.548 "abort": true, 00:16:55.548 "nvme_admin": true, 00:16:55.548 "nvme_io": true 00:16:55.548 }, 00:16:55.548 "memory_domains": [ 00:16:55.548 { 00:16:55.548 "dma_device_id": "system", 00:16:55.548 "dma_device_type": 1 00:16:55.548 } 00:16:55.548 ], 00:16:55.548 "driver_specific": { 00:16:55.548 "nvme": [ 00:16:55.548 { 00:16:55.548 "trid": { 00:16:55.548 "trtype": "TCP", 00:16:55.548 "adrfam": "IPv4", 00:16:55.548 "traddr": "10.0.0.2", 00:16:55.548 "trsvcid": "4420", 00:16:55.548 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:55.548 }, 00:16:55.548 "ctrlr_data": { 00:16:55.548 "cntlid": 1, 00:16:55.548 "vendor_id": "0x8086", 00:16:55.548 "model_number": "SPDK bdev Controller", 00:16:55.548 "serial_number": "SPDK0", 00:16:55.548 "firmware_revision": "24.05.1", 00:16:55.548 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:55.548 "oacs": { 00:16:55.548 "security": 0, 00:16:55.548 "format": 0, 00:16:55.548 "firmware": 0, 00:16:55.548 "ns_manage": 0 00:16:55.548 }, 00:16:55.548 "multi_ctrlr": true, 00:16:55.548 "ana_reporting": false 00:16:55.548 }, 00:16:55.548 "vs": { 00:16:55.548 "nvme_version": "1.3" 00:16:55.548 }, 00:16:55.548 "ns_data": { 00:16:55.548 "id": 1, 00:16:55.548 "can_share": true 00:16:55.548 } 00:16:55.548 } 00:16:55.548 ], 00:16:55.548 "mp_policy": "active_passive" 00:16:55.548 } 00:16:55.548 } 00:16:55.548 ] 00:16:55.548 17:52:30 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=929351 00:16:55.548 17:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:55.548 17:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:55.548 Running I/O for 10 seconds... 00:16:56.920 Latency(us) 00:16:56.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:56.920 Nvme0n1 : 1.00 14128.00 55.19 0.00 0.00 0.00 0.00 0.00 00:16:56.920 =================================================================================================================== 00:16:56.920 Total : 14128.00 55.19 0.00 0.00 0.00 0.00 0.00 00:16:56.920 00:16:57.486 17:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1883d2ad-b7c4-45a6-9869-53e9f6c8df46 00:16:57.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.753 Nvme0n1 : 2.00 14360.00 56.09 0.00 0.00 0.00 0.00 0.00 00:16:57.753 =================================================================================================================== 00:16:57.753 Total : 14360.00 56.09 0.00 0.00 0.00 0.00 0.00 00:16:57.753 00:16:57.753 true 00:16:57.753 17:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1883d2ad-b7c4-45a6-9869-53e9f6c8df46 00:16:57.753 17:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:58.011 17:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:58.011 17:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:58.011 17:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 929351 00:16:58.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.589 Nvme0n1 : 3.00 14423.33 56.34 0.00 0.00 0.00 0.00 0.00 00:16:58.589 =================================================================================================================== 00:16:58.589 Total : 14423.33 56.34 0.00 0.00 0.00 0.00 0.00 00:16:58.589 00:16:59.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.519 Nvme0n1 : 4.00 14540.00 56.80 0.00 0.00 0.00 0.00 0.00 00:16:59.519 =================================================================================================================== 00:16:59.519 Total : 14540.00 56.80 0.00 0.00 0.00 0.00 0.00 00:16:59.519 00:17:00.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.890 Nvme0n1 : 5.00 14576.00 56.94 0.00 0.00 0.00 0.00 0.00 00:17:00.890 =================================================================================================================== 00:17:00.890 Total : 14576.00 56.94 0.00 0.00 0.00 0.00 0.00 00:17:00.890 00:17:01.824 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.824 Nvme0n1 : 6.00 14600.00 57.03 0.00 0.00 0.00 0.00 0.00 00:17:01.824 
=================================================================================================================== 00:17:01.824 Total : 14600.00 57.03 0.00 0.00 0.00 0.00 0.00 00:17:01.824 00:17:02.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.757 Nvme0n1 : 7.00 14666.00 57.29 0.00 0.00 0.00 0.00 0.00 00:17:02.757 =================================================================================================================== 00:17:02.757 Total : 14666.00 57.29 0.00 0.00 0.00 0.00 0.00 00:17:02.757 00:17:03.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.688 Nvme0n1 : 8.00 14678.00 57.34 0.00 0.00 0.00 0.00 0.00 00:17:03.688 =================================================================================================================== 00:17:03.688 Total : 14678.00 57.34 0.00 0.00 0.00 0.00 0.00 00:17:03.688 00:17:04.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.644 Nvme0n1 : 9.00 14696.89 57.41 0.00 0.00 0.00 0.00 0.00 00:17:04.644 =================================================================================================================== 00:17:04.644 Total : 14696.89 57.41 0.00 0.00 0.00 0.00 0.00 00:17:04.644 00:17:05.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.576 Nvme0n1 : 10.00 14695.00 57.40 0.00 0.00 0.00 0.00 0.00 00:17:05.576 =================================================================================================================== 00:17:05.576 Total : 14695.00 57.40 0.00 0.00 0.00 0.00 0.00 00:17:05.576 00:17:05.576 00:17:05.576 Latency(us) 00:17:05.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.576 Nvme0n1 : 10.01 14698.39 57.42 0.00 0.00 8702.74 2305.90 16505.36 00:17:05.576 =================================================================================================================== 00:17:05.576 Total : 14698.39 57.42 0.00 0.00 8702.74 2305.90 16505.36 00:17:05.576 0 00:17:05.576 17:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 929214 00:17:05.576 17:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 929214 ']' 00:17:05.576 17:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 929214 00:17:05.576 17:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:05.576 17:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:05.576 17:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 929214 00:17:05.834 17:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:05.834 17:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:05.834 17:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 929214' 00:17:05.834 killing process with pid 929214 00:17:05.834 17:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 929214 00:17:05.834 Received shutdown signal, test time was about 10.000000 seconds 00:17:05.834 00:17:05.834 Latency(us) 00:17:05.834 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:17:05.834 =================================================================================================================== 00:17:05.834 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:05.834 17:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 929214 00:17:05.834 17:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:06.091 17:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:06.348 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1883d2ad-b7c4-45a6-9869-53e9f6c8df46 00:17:06.348 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:06.605 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:06.605 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:06.605 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 926726 00:17:06.605 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 926726 00:17:06.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 926726 Killed "${NVMF_APP[@]}" "$@" 00:17:06.863 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:06.863 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:06.863 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:06.863 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:06.863 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:06.863 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=930671 00:17:06.863 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:06.863 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 930671 00:17:06.863 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 930671 ']' 00:17:06.863 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.863 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:06.863 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
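The block above is the heart of the "dirty" case: a 10-second randwrite run from bdevperf against the exported lvol, with bdev_lvol_grow_lvstore issued at roughly the 2-second mark (total_data_clusters goes from 49 to 99 while I/O continues at a steady ~14.1k-14.7k IOPS), followed by a shutdown that deliberately skips the clean path. Condensed, and again only as a sketch ($rpc, $lvs and $nvmfpid are shorthands, not variables from the script):

  $rpc bdev_lvol_grow_lvstore -u $lvs                                       # issued during the bdevperf run
  $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'    # now 99
  $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters'          # 61 free before the crash
  kill -9 $nvmfpid                                                          # SIGKILL the nvmf_tgt: the lvstore is left dirty
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &            # fresh target process

The point of the SIGKILL is that the blobstore is never closed cleanly, so the reload that follows has to go through recovery rather than a normal load, which is exactly what the next bdev_aio_create reports.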
00:17:06.863 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:06.863 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:06.863 [2024-07-20 17:52:41.456523] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:06.863 [2024-07-20 17:52:41.456603] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.863 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.863 [2024-07-20 17:52:41.525731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.863 [2024-07-20 17:52:41.616698] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.863 [2024-07-20 17:52:41.616761] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.863 [2024-07-20 17:52:41.616778] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.863 [2024-07-20 17:52:41.616800] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.863 [2024-07-20 17:52:41.616814] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.863 [2024-07-20 17:52:41.616846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.121 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:07.121 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:07.121 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:07.121 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.121 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:07.121 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.121 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:07.378 [2024-07-20 17:52:41.967835] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:07.378 [2024-07-20 17:52:41.967978] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:07.378 [2024-07-20 17:52:41.968036] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:07.378 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:07.378 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 325927e9-c64e-4d2b-85b5-416db0b5fa1a 00:17:07.378 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=325927e9-c64e-4d2b-85b5-416db0b5fa1a 00:17:07.378 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:07.378 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:07.378 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:07.378 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:07.378 17:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:07.636 17:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 325927e9-c64e-4d2b-85b5-416db0b5fa1a -t 2000 00:17:07.892 [ 00:17:07.892 { 00:17:07.892 "name": "325927e9-c64e-4d2b-85b5-416db0b5fa1a", 00:17:07.892 "aliases": [ 00:17:07.892 "lvs/lvol" 00:17:07.892 ], 00:17:07.892 "product_name": "Logical Volume", 00:17:07.892 "block_size": 4096, 00:17:07.892 "num_blocks": 38912, 00:17:07.892 "uuid": "325927e9-c64e-4d2b-85b5-416db0b5fa1a", 00:17:07.892 "assigned_rate_limits": { 00:17:07.892 "rw_ios_per_sec": 0, 00:17:07.892 "rw_mbytes_per_sec": 0, 00:17:07.892 "r_mbytes_per_sec": 0, 00:17:07.892 "w_mbytes_per_sec": 0 00:17:07.892 }, 00:17:07.892 "claimed": false, 00:17:07.892 "zoned": false, 00:17:07.892 "supported_io_types": { 00:17:07.892 "read": true, 00:17:07.892 "write": true, 00:17:07.892 "unmap": true, 00:17:07.892 "write_zeroes": true, 00:17:07.892 "flush": false, 00:17:07.892 "reset": true, 00:17:07.892 "compare": false, 00:17:07.892 "compare_and_write": false, 00:17:07.892 "abort": false, 00:17:07.892 "nvme_admin": false, 00:17:07.892 "nvme_io": false 00:17:07.892 }, 00:17:07.892 "driver_specific": { 00:17:07.892 "lvol": { 00:17:07.892 "lvol_store_uuid": "1883d2ad-b7c4-45a6-9869-53e9f6c8df46", 00:17:07.892 "base_bdev": "aio_bdev", 00:17:07.892 "thin_provision": false, 00:17:07.892 "num_allocated_clusters": 38, 00:17:07.892 "snapshot": false, 00:17:07.892 "clone": false, 00:17:07.892 "esnap_clone": false 00:17:07.892 } 00:17:07.893 } 00:17:07.893 } 00:17:07.893 ] 00:17:07.893 17:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:07.893 17:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1883d2ad-b7c4-45a6-9869-53e9f6c8df46 00:17:07.893 17:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:08.150 17:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:08.150 17:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1883d2ad-b7c4-45a6-9869-53e9f6c8df46 00:17:08.150 17:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:08.407 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:08.407 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:08.663 [2024-07-20 17:52:43.272899] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:08.663 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
1883d2ad-b7c4-45a6-9869-53e9f6c8df46 00:17:08.663 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:08.664 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1883d2ad-b7c4-45a6-9869-53e9f6c8df46 00:17:08.664 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.664 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.664 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.664 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.664 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.664 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.664 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.664 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:08.664 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1883d2ad-b7c4-45a6-9869-53e9f6c8df46 00:17:08.921 request: 00:17:08.921 { 00:17:08.921 "uuid": "1883d2ad-b7c4-45a6-9869-53e9f6c8df46", 00:17:08.921 "method": "bdev_lvol_get_lvstores", 00:17:08.921 "req_id": 1 00:17:08.921 } 00:17:08.921 Got JSON-RPC error response 00:17:08.921 response: 00:17:08.921 { 00:17:08.921 "code": -19, 00:17:08.921 "message": "No such device" 00:17:08.921 } 00:17:08.921 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:08.921 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:08.921 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:08.921 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:08.921 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:09.179 aio_bdev 00:17:09.179 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 325927e9-c64e-4d2b-85b5-416db0b5fa1a 00:17:09.179 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=325927e9-c64e-4d2b-85b5-416db0b5fa1a 00:17:09.179 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:09.179 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:09.179 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
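What the reload above demonstrates: re-creating the AIO bdev on the grown 400M file makes the blobstore run recovery (the bs_recover notices), after which the lvstore still reports the post-grow geometry, and hot-removing the base bdev makes the lvstore disappear cleanly. A sketch of the checks involved, using the same shorthands as above:

  $rpc bdev_aio_create $aio aio_bdev 4096                                   # load triggers "Performing recovery on blobstore"
  $rpc bdev_get_bdevs -b $lvol -t 2000                                      # lvol comes back, 38 clusters allocated
  $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters'          # 61 -- the grow survived the crash
  $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'    # 99
  $rpc bdev_aio_delete aio_bdev                                             # hot-remove: lvstore "lvs" is closed
  $rpc bdev_lvol_get_lvstores -u $lvs                                       # now fails with -19 "No such device"
  $rpc bdev_aio_create $aio aio_bdev 4096                                   # re-create for the final teardown

After the negative check the bdev is created once more so that the lvol and the lvstore can be deleted through the normal RPCs before the backing file is removed.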
00:17:09.179 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:09.179 17:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:09.436 17:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 325927e9-c64e-4d2b-85b5-416db0b5fa1a -t 2000 00:17:09.693 [ 00:17:09.693 { 00:17:09.693 "name": "325927e9-c64e-4d2b-85b5-416db0b5fa1a", 00:17:09.693 "aliases": [ 00:17:09.693 "lvs/lvol" 00:17:09.693 ], 00:17:09.693 "product_name": "Logical Volume", 00:17:09.693 "block_size": 4096, 00:17:09.693 "num_blocks": 38912, 00:17:09.693 "uuid": "325927e9-c64e-4d2b-85b5-416db0b5fa1a", 00:17:09.693 "assigned_rate_limits": { 00:17:09.693 "rw_ios_per_sec": 0, 00:17:09.693 "rw_mbytes_per_sec": 0, 00:17:09.693 "r_mbytes_per_sec": 0, 00:17:09.694 "w_mbytes_per_sec": 0 00:17:09.694 }, 00:17:09.694 "claimed": false, 00:17:09.694 "zoned": false, 00:17:09.694 "supported_io_types": { 00:17:09.694 "read": true, 00:17:09.694 "write": true, 00:17:09.694 "unmap": true, 00:17:09.694 "write_zeroes": true, 00:17:09.694 "flush": false, 00:17:09.694 "reset": true, 00:17:09.694 "compare": false, 00:17:09.694 "compare_and_write": false, 00:17:09.694 "abort": false, 00:17:09.694 "nvme_admin": false, 00:17:09.694 "nvme_io": false 00:17:09.694 }, 00:17:09.694 "driver_specific": { 00:17:09.694 "lvol": { 00:17:09.694 "lvol_store_uuid": "1883d2ad-b7c4-45a6-9869-53e9f6c8df46", 00:17:09.694 "base_bdev": "aio_bdev", 00:17:09.694 "thin_provision": false, 00:17:09.694 "num_allocated_clusters": 38, 00:17:09.694 "snapshot": false, 00:17:09.694 "clone": false, 00:17:09.694 "esnap_clone": false 00:17:09.694 } 00:17:09.694 } 00:17:09.694 } 00:17:09.694 ] 00:17:09.694 17:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:09.694 17:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1883d2ad-b7c4-45a6-9869-53e9f6c8df46 00:17:09.694 17:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:09.951 17:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:09.951 17:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1883d2ad-b7c4-45a6-9869-53e9f6c8df46 00:17:09.951 17:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:10.209 17:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:10.209 17:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 325927e9-c64e-4d2b-85b5-416db0b5fa1a 00:17:10.467 17:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1883d2ad-b7c4-45a6-9869-53e9f6c8df46 00:17:10.725 17:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:10.983 17:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:10.983 00:17:10.983 real 0m19.054s 00:17:10.983 user 0m48.214s 00:17:10.983 sys 0m4.753s 00:17:10.983 17:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:10.983 17:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:10.983 ************************************ 00:17:10.983 END TEST lvs_grow_dirty 00:17:10.983 ************************************ 00:17:10.983 17:52:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:10.983 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:10.983 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:10.983 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:10.983 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:10.983 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:10.983 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:10.983 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:10.983 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:10.983 nvmf_trace.0 00:17:10.983 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:11.242 rmmod nvme_tcp 00:17:11.242 rmmod nvme_fabrics 00:17:11.242 rmmod nvme_keyring 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 930671 ']' 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 930671 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 930671 ']' 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 930671 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 930671 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow 
-- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 930671' 00:17:11.242 killing process with pid 930671 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 930671 00:17:11.242 17:52:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 930671 00:17:11.500 17:52:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:11.500 17:52:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:11.500 17:52:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:11.500 17:52:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:11.500 17:52:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:11.500 17:52:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.500 17:52:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.500 17:52:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.399 17:52:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:13.399 00:17:13.399 real 0m41.859s 00:17:13.399 user 1m10.988s 00:17:13.399 sys 0m8.474s 00:17:13.399 17:52:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:13.399 17:52:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:13.399 ************************************ 00:17:13.399 END TEST nvmf_lvs_grow 00:17:13.399 ************************************ 00:17:13.399 17:52:48 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:13.399 17:52:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:13.399 17:52:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:13.399 17:52:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:13.399 ************************************ 00:17:13.399 START TEST nvmf_bdev_io_wait 00:17:13.399 ************************************ 00:17:13.399 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:13.657 * Looking for test storage... 
00:17:13.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.657 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:13.658 17:52:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:15.558 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:15.559 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:15.559 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:15.559 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:15.559 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:15.559 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:15.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:15.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:17:15.816 00:17:15.816 --- 10.0.0.2 ping statistics --- 00:17:15.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.816 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:15.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:15.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:17:15.816 00:17:15.816 --- 10.0.0.1 ping statistics --- 00:17:15.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.816 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=933193 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 933193 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 933193 ']' 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:15.816 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:15.816 [2024-07-20 17:52:50.486893] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
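Before the bdev_io_wait target comes up, the harness wires the two ports of the Intel NIC at 0000:0a:00.0/00.1 (ice driver) into a loopback topology: the target-side port cvl_0_0 is moved into a network namespace and the initiator-side port cvl_0_1 stays in the root namespace, so traffic between 10.0.0.1 and 10.0.0.2 goes through the real ports instead of short-circuiting over the kernel loopback. Roughly, as a sketch of the commands visible in the trace rather than a verbatim excerpt:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # root namespace -> namespaced target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # and back

The nvmf_tgt processes in this trace are then launched through "ip netns exec cvl_0_0_ns_spdk", which is why the subsystems and listeners created over RPC advertise 10.0.0.2 port 4420.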
00:17:15.816 [2024-07-20 17:52:50.486969] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.816 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.816 [2024-07-20 17:52:50.556185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:16.073 [2024-07-20 17:52:50.648423] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.073 [2024-07-20 17:52:50.648482] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.073 [2024-07-20 17:52:50.648500] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.073 [2024-07-20 17:52:50.648513] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.073 [2024-07-20 17:52:50.648525] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.073 [2024-07-20 17:52:50.648610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.073 [2024-07-20 17:52:50.648682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.073 [2024-07-20 17:52:50.648781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.073 [2024-07-20 17:52:50.648783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:16.073 [2024-07-20 17:52:50.796885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.073 17:52:50 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:16.073 Malloc0 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.073 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:16.073 [2024-07-20 17:52:50.866314] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=933221 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=933223 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:16.332 { 00:17:16.332 "params": { 00:17:16.332 "name": "Nvme$subsystem", 00:17:16.332 "trtype": "$TEST_TRANSPORT", 00:17:16.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:16.332 "adrfam": "ipv4", 00:17:16.332 "trsvcid": "$NVMF_PORT", 00:17:16.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:16.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:16.332 "hdgst": ${hdgst:-false}, 00:17:16.332 "ddgst": ${ddgst:-false} 00:17:16.332 }, 00:17:16.332 "method": "bdev_nvme_attach_controller" 00:17:16.332 } 00:17:16.332 EOF 00:17:16.332 )") 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=933225 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:16.332 { 00:17:16.332 "params": { 00:17:16.332 "name": "Nvme$subsystem", 00:17:16.332 "trtype": "$TEST_TRANSPORT", 00:17:16.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:16.332 "adrfam": "ipv4", 00:17:16.332 "trsvcid": "$NVMF_PORT", 00:17:16.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:16.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:16.332 "hdgst": ${hdgst:-false}, 00:17:16.332 "ddgst": ${ddgst:-false} 00:17:16.332 }, 00:17:16.332 "method": "bdev_nvme_attach_controller" 00:17:16.332 } 00:17:16.332 EOF 00:17:16.332 )") 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=933228 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:16.332 { 00:17:16.332 "params": { 00:17:16.332 "name": "Nvme$subsystem", 00:17:16.332 "trtype": "$TEST_TRANSPORT", 00:17:16.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:16.332 "adrfam": "ipv4", 00:17:16.332 "trsvcid": "$NVMF_PORT", 00:17:16.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:16.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:16.332 "hdgst": ${hdgst:-false}, 00:17:16.332 "ddgst": ${ddgst:-false} 00:17:16.332 }, 00:17:16.332 "method": "bdev_nvme_attach_controller" 00:17:16.332 } 00:17:16.332 EOF 00:17:16.332 )") 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:17:16.332 { 00:17:16.332 "params": { 00:17:16.332 "name": "Nvme$subsystem", 00:17:16.332 "trtype": "$TEST_TRANSPORT", 00:17:16.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:16.332 "adrfam": "ipv4", 00:17:16.332 "trsvcid": "$NVMF_PORT", 00:17:16.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:16.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:16.332 "hdgst": ${hdgst:-false}, 00:17:16.332 "ddgst": ${ddgst:-false} 00:17:16.332 }, 00:17:16.332 "method": "bdev_nvme_attach_controller" 00:17:16.332 } 00:17:16.332 EOF 00:17:16.332 )") 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 933221 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:16.332 "params": { 00:17:16.332 "name": "Nvme1", 00:17:16.332 "trtype": "tcp", 00:17:16.332 "traddr": "10.0.0.2", 00:17:16.332 "adrfam": "ipv4", 00:17:16.332 "trsvcid": "4420", 00:17:16.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:16.332 "hdgst": false, 00:17:16.332 "ddgst": false 00:17:16.332 }, 00:17:16.332 "method": "bdev_nvme_attach_controller" 00:17:16.332 }' 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:16.332 "params": { 00:17:16.332 "name": "Nvme1", 00:17:16.332 "trtype": "tcp", 00:17:16.332 "traddr": "10.0.0.2", 00:17:16.332 "adrfam": "ipv4", 00:17:16.332 "trsvcid": "4420", 00:17:16.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:16.332 "hdgst": false, 00:17:16.332 "ddgst": false 00:17:16.332 }, 00:17:16.332 "method": "bdev_nvme_attach_controller" 00:17:16.332 }' 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:16.332 "params": { 00:17:16.332 "name": "Nvme1", 00:17:16.332 "trtype": "tcp", 00:17:16.332 "traddr": "10.0.0.2", 00:17:16.332 "adrfam": "ipv4", 00:17:16.332 "trsvcid": "4420", 00:17:16.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:16.332 "hdgst": false, 00:17:16.332 "ddgst": false 00:17:16.332 }, 00:17:16.332 "method": "bdev_nvme_attach_controller" 00:17:16.332 }' 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:16.332 17:52:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:16.332 "params": { 00:17:16.332 "name": "Nvme1", 00:17:16.332 "trtype": "tcp", 00:17:16.332 "traddr": "10.0.0.2", 00:17:16.332 "adrfam": "ipv4", 00:17:16.332 "trsvcid": "4420", 00:17:16.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:16.332 "hdgst": false, 00:17:16.332 "ddgst": false 00:17:16.332 }, 00:17:16.332 "method": "bdev_nvme_attach_controller" 
00:17:16.332 }' 00:17:16.332 [2024-07-20 17:52:50.913418] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:16.332 [2024-07-20 17:52:50.913422] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:16.332 [2024-07-20 17:52:50.913422] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:16.332 [2024-07-20 17:52:50.913423] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:16.332 [2024-07-20 17:52:50.913501] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:16.332 [2024-07-20 17:52:50.913503] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:16.332 [2024-07-20 17:52:50.913504] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:16.332 [2024-07-20 17:52:50.913504] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:16.332 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.332 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.332 [2024-07-20 17:52:51.092090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.590 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.590 [2024-07-20 17:52:51.168595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:16.590 [2024-07-20 17:52:51.195949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.590 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.590 [2024-07-20 17:52:51.270384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:16.590 [2024-07-20 17:52:51.294810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.590 [2024-07-20 17:52:51.372856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.590 [2024-07-20 17:52:51.374622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:16.847 [2024-07-20 17:52:51.444506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:16.847 Running I/O for 1 seconds... 00:17:16.847 Running I/O for 1 seconds... 00:17:17.104 Running I/O for 1 seconds... 00:17:17.104 Running I/O for 1 seconds...
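
The four "Running I/O for 1 seconds..." lines above come from four bdevperf instances started in parallel, one per workload, each pinned to its own core mask and shm id and fed the target description as JSON over /dev/fd/63. A condensed sketch of that fan-out, using the values from this run; gen_nvmf_target_json is reduced to a literal function here, and the outer "subsystems"/"bdev" wrapper is an assumption, since the trace only prints the bdev_nvme_attach_controller entry it contains:

BPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

target_json() {          # simplified stand-in for gen_nvmf_target_json
  cat <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false } } ] } ] }
JSON
}

# one bdevperf per workload: write, read, flush, unmap
$BPERF -m 0x10 -i 1 --json <(target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BPERF -m 0x20 -i 2 --json <(target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
$BPERF -m 0x40 -i 3 --json <(target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BPERF -m 0x80 -i 4 --json <(target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID    # the script waits on each pid in turn
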
00:17:18.037 00:17:18.037 Latency(us) 00:17:18.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.037 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:18.037 Nvme1n1 : 1.01 11438.50 44.68 0.00 0.00 11152.58 7670.14 17185.00 00:17:18.037 =================================================================================================================== 00:17:18.037 Total : 11438.50 44.68 0.00 0.00 11152.58 7670.14 17185.00 00:17:18.037 00:17:18.037 Latency(us) 00:17:18.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.037 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:18.037 Nvme1n1 : 1.00 181920.87 710.63 0.00 0.00 700.83 271.55 1001.24 00:17:18.037 =================================================================================================================== 00:17:18.037 Total : 181920.87 710.63 0.00 0.00 700.83 271.55 1001.24 00:17:18.037 00:17:18.037 Latency(us) 00:17:18.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.037 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:18.037 Nvme1n1 : 1.02 7346.25 28.70 0.00 0.00 17271.11 5145.79 26796.94 00:17:18.037 =================================================================================================================== 00:17:18.037 Total : 7346.25 28.70 0.00 0.00 17271.11 5145.79 26796.94 00:17:18.037 00:17:18.037 Latency(us) 00:17:18.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.037 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:18.037 Nvme1n1 : 1.01 6891.50 26.92 0.00 0.00 18549.45 3349.62 23107.51 00:17:18.037 =================================================================================================================== 00:17:18.037 Total : 6891.50 26.92 0.00 0.00 18549.45 3349.62 23107.51 00:17:18.295 17:52:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 933223 00:17:18.295 17:52:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 933225 00:17:18.295 17:52:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 933228 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:18.295 rmmod nvme_tcp 00:17:18.295 rmmod nvme_fabrics 00:17:18.295 rmmod nvme_keyring 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 933193 ']' 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 933193 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 933193 ']' 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 933193 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:18.295 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 933193 00:17:18.554 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:18.554 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:18.554 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 933193' 00:17:18.554 killing process with pid 933193 00:17:18.554 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 933193 00:17:18.554 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 933193 00:17:18.554 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:18.554 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:18.554 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:18.554 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:18.554 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:18.554 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.554 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.554 17:52:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.083 17:52:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:21.083 00:17:21.083 real 0m7.181s 00:17:21.083 user 0m16.148s 00:17:21.083 sys 0m3.328s 00:17:21.083 17:52:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:21.083 17:52:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:21.083 ************************************ 00:17:21.083 END TEST nvmf_bdev_io_wait 00:17:21.083 ************************************ 00:17:21.083 17:52:55 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:21.083 17:52:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:21.083 17:52:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:21.083 17:52:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:21.083 ************************************ 00:17:21.083 START TEST nvmf_queue_depth 00:17:21.083 ************************************ 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:21.083 * Looking for test storage... 00:17:21.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.083 17:52:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:21.084 17:52:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:22.984 
17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:22.984 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:22.984 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:22.984 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:22.984 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:22.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:22.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:17:22.984 00:17:22.984 --- 10.0.0.2 ping statistics --- 00:17:22.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.984 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:17:22.984 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:22.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:22.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:17:22.985 00:17:22.985 --- 10.0.0.1 ping statistics --- 00:17:22.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.985 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=935442 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 935442 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 935442 ']' 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:22.985 [2024-07-20 17:52:57.519776] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
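
The nvmf_tcp_init trace above amounts to a simple two-sided layout: the target-side port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace, with an iptables rule letting NVMe/TCP traffic in on port 4420. A rough replay of those commands, using the interface names and addresses from this run:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP in from the initiator port
ping -c 1 10.0.0.2                                               # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target namespace -> root namespace
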
00:17:22.985 [2024-07-20 17:52:57.519873] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.985 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.985 [2024-07-20 17:52:57.581524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.985 [2024-07-20 17:52:57.665544] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.985 [2024-07-20 17:52:57.665609] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.985 [2024-07-20 17:52:57.665622] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.985 [2024-07-20 17:52:57.665634] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.985 [2024-07-20 17:52:57.665643] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.985 [2024-07-20 17:52:57.665668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:22.985 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:23.242 17:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:23.243 [2024-07-20 17:52:57.798757] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:23.243 Malloc0 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.243 17:52:57 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:23.243 [2024-07-20 17:52:57.859470] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=935527 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 935527 /var/tmp/bdevperf.sock 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 935527 ']' 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:23.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:23.243 17:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:23.243 [2024-07-20 17:52:57.907670] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:17:23.243 [2024-07-20 17:52:57.907750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid935527 ] 00:17:23.243 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.243 [2024-07-20 17:52:57.972207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.499 [2024-07-20 17:52:58.063499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.499 17:52:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:23.499 17:52:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:23.499 17:52:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:23.499 17:52:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.499 17:52:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:23.756 NVMe0n1 00:17:23.756 17:52:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.756 17:52:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:23.756 Running I/O for 10 seconds... 00:17:36.028 00:17:36.028 Latency(us) 00:17:36.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.028 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:36.028 Verification LBA range: start 0x0 length 0x4000 00:17:36.028 NVMe0n1 : 10.10 8498.94 33.20 0.00 0.00 119910.82 24660.95 75730.49 00:17:36.028 =================================================================================================================== 00:17:36.028 Total : 8498.94 33.20 0.00 0.00 119910.82 24660.95 75730.49 00:17:36.028 0 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 935527 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 935527 ']' 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 935527 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 935527 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 935527' 00:17:36.028 killing process with pid 935527 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 935527 00:17:36.028 Received shutdown signal, test time was about 10.000000 seconds 00:17:36.028 00:17:36.028 Latency(us) 00:17:36.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.028 =================================================================================================================== 00:17:36.028 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 935527 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:36.028 rmmod nvme_tcp 00:17:36.028 rmmod nvme_fabrics 00:17:36.028 rmmod nvme_keyring 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 935442 ']' 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 935442 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 935442 ']' 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 935442 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 935442 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 935442' 00:17:36.028 killing process with pid 935442 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 935442 00:17:36.028 17:53:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 935442 00:17:36.028 17:53:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:36.028 17:53:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:36.028 17:53:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:36.028 17:53:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:36.028 17:53:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:36.028 17:53:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.028 17:53:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.028 17:53:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.595 17:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:36.595 00:17:36.595 real 0m15.874s 00:17:36.595 user 0m22.575s 00:17:36.595 sys 0m2.962s 
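
Put together, the queue-depth run that just finished is the following sequence (paths, names and the 1024-deep verify workload are the ones traced in this log; rpc.py stands in for the script's rpc_cmd wrapper and a plain kill for its killprocess helper):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py

# target side (nvmf_tgt already running in cvl_0_0_ns_spdk with -m 0x2)
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevperf idles (-z) on its own RPC socket, gets the remote
# controller attached over that socket, then runs a 10 s verify at queue depth 1024
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
BDEVPERF_PID=$!
waitforlisten $BDEVPERF_PID /var/tmp/bdevperf.sock               # harness helper: wait for the bdevperf RPC socket
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
kill $BDEVPERF_PID
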
00:17:36.595 17:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:36.595 17:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:36.595 ************************************ 00:17:36.595 END TEST nvmf_queue_depth 00:17:36.595 ************************************ 00:17:36.595 17:53:11 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:36.595 17:53:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:36.595 17:53:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:36.595 17:53:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:36.595 ************************************ 00:17:36.595 START TEST nvmf_target_multipath 00:17:36.595 ************************************ 00:17:36.595 17:53:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:36.853 * Looking for test storage... 00:17:36.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.853 17:53:11 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.853 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:36.854 17:53:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:38.788 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:38.789 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:38.789 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:38.789 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:38.789 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:38.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:17:38.789 00:17:38.789 --- 10.0.0.2 ping statistics --- 00:17:38.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.789 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:38.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:17:38.789 00:17:38.789 --- 10.0.0.1 ping statistics --- 00:17:38.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.789 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:38.789 only one NIC for nvmf test 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:38.789 rmmod nvme_tcp 00:17:38.789 rmmod nvme_fabrics 00:17:38.789 rmmod nvme_keyring 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.789 17:53:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:41.318 00:17:41.318 real 0m4.293s 00:17:41.318 user 0m0.846s 00:17:41.318 sys 0m1.450s 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:41.318 17:53:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:41.318 ************************************ 00:17:41.318 END TEST nvmf_target_multipath 00:17:41.318 ************************************ 00:17:41.318 17:53:15 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:41.318 17:53:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:41.318 17:53:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:41.318 17:53:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:41.318 ************************************ 00:17:41.318 START TEST nvmf_zcopy 00:17:41.318 ************************************ 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:41.318 * Looking for test storage... 
00:17:41.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.318 17:53:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:41.319 17:53:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:43.221 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:43.221 
17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:43.221 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:43.221 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:43.221 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:43.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:17:43.221 00:17:43.221 --- 10.0.0.2 ping statistics --- 00:17:43.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.221 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:43.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:17:43.221 00:17:43.221 --- 10.0.0.1 ping statistics --- 00:17:43.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.221 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=940627 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 940627 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 940627 ']' 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.221 17:53:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:43.222 17:53:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.222 [2024-07-20 17:53:17.920838] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:43.222 [2024-07-20 17:53:17.920922] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.222 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.222 [2024-07-20 17:53:17.984685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.481 [2024-07-20 17:53:18.074272] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.481 [2024-07-20 17:53:18.074334] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:43.481 [2024-07-20 17:53:18.074348] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.481 [2024-07-20 17:53:18.074359] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.481 [2024-07-20 17:53:18.074369] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.481 [2024-07-20 17:53:18.074396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.481 [2024-07-20 17:53:18.219072] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.481 [2024-07-20 17:53:18.235314] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.481 malloc0 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.481 
17:53:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:43.481 { 00:17:43.481 "params": { 00:17:43.481 "name": "Nvme$subsystem", 00:17:43.481 "trtype": "$TEST_TRANSPORT", 00:17:43.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:43.481 "adrfam": "ipv4", 00:17:43.481 "trsvcid": "$NVMF_PORT", 00:17:43.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:43.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:43.481 "hdgst": ${hdgst:-false}, 00:17:43.481 "ddgst": ${ddgst:-false} 00:17:43.481 }, 00:17:43.481 "method": "bdev_nvme_attach_controller" 00:17:43.481 } 00:17:43.481 EOF 00:17:43.481 )") 00:17:43.481 17:53:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:43.740 17:53:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:43.740 17:53:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:43.740 17:53:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:43.740 "params": { 00:17:43.740 "name": "Nvme1", 00:17:43.740 "trtype": "tcp", 00:17:43.740 "traddr": "10.0.0.2", 00:17:43.740 "adrfam": "ipv4", 00:17:43.740 "trsvcid": "4420", 00:17:43.740 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.740 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.740 "hdgst": false, 00:17:43.740 "ddgst": false 00:17:43.740 }, 00:17:43.740 "method": "bdev_nvme_attach_controller" 00:17:43.740 }' 00:17:43.740 [2024-07-20 17:53:18.312940] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:43.740 [2024-07-20 17:53:18.313026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid940660 ] 00:17:43.740 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.740 [2024-07-20 17:53:18.379672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.740 [2024-07-20 17:53:18.473968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.304 Running I/O for 10 seconds... 
00:17:54.264 00:17:54.264 Latency(us) 00:17:54.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.264 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:54.264 Verification LBA range: start 0x0 length 0x1000 00:17:54.264 Nvme1n1 : 10.03 3880.06 30.31 0.00 0.00 32919.71 3786.52 68739.98 00:17:54.264 =================================================================================================================== 00:17:54.264 Total : 3880.06 30.31 0.00 0.00 32919.71 3786.52 68739.98 00:17:54.522 17:53:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=941959 00:17:54.522 17:53:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:17:54.522 17:53:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:54.522 17:53:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:54.522 17:53:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:54.522 17:53:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:54.522 17:53:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:54.522 17:53:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:54.522 17:53:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:54.522 { 00:17:54.522 "params": { 00:17:54.522 "name": "Nvme$subsystem", 00:17:54.522 "trtype": "$TEST_TRANSPORT", 00:17:54.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:54.523 "adrfam": "ipv4", 00:17:54.523 "trsvcid": "$NVMF_PORT", 00:17:54.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:54.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:54.523 "hdgst": ${hdgst:-false}, 00:17:54.523 "ddgst": ${ddgst:-false} 00:17:54.523 }, 00:17:54.523 "method": "bdev_nvme_attach_controller" 00:17:54.523 } 00:17:54.523 EOF 00:17:54.523 )") 00:17:54.523 17:53:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:54.523 [2024-07-20 17:53:29.107708] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.107746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 17:53:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:17:54.523 17:53:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:54.523 17:53:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:54.523 "params": { 00:17:54.523 "name": "Nvme1", 00:17:54.523 "trtype": "tcp", 00:17:54.523 "traddr": "10.0.0.2", 00:17:54.523 "adrfam": "ipv4", 00:17:54.523 "trsvcid": "4420", 00:17:54.523 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.523 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.523 "hdgst": false, 00:17:54.523 "ddgst": false 00:17:54.523 }, 00:17:54.523 "method": "bdev_nvme_attach_controller" 00:17:54.523 }' 00:17:54.523 [2024-07-20 17:53:29.115670] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.115692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.123690] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.123710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.131711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.131731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.139734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.139753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.141039] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:54.523 [2024-07-20 17:53:29.141113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941959 ] 00:17:54.523 [2024-07-20 17:53:29.147754] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.147788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.155798] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.155819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.163820] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.163854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.523 [2024-07-20 17:53:29.171845] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.171866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.179879] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.179901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.187896] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.187918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.195916] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.195938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.200883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.523 [2024-07-20 17:53:29.203932] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.203953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.211983] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.212018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.219971] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.219994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.227985] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.228006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.236013] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.236034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.244028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.244050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.252086] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.252117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.260114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.260159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.268113] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.268148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.276130] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.276151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.284162] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.284196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.291289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.523 [2024-07-20 17:53:29.292168] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.292188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.300204] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.300223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:17:54.523 [2024-07-20 17:53:29.308248] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.308278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.523 [2024-07-20 17:53:29.316304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.523 [2024-07-20 17:53:29.316343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.324293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.324325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.332314] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.332350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.340333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.340369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.348357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.348393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.356377] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.356410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.364372] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.364393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.372421] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.372455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.380441] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.380475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.388437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.388457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.396457] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.396478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.404491] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.404529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.412532] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.412557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.420548] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:17:54.782 [2024-07-20 17:53:29.420571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.428554] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.428577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.436592] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.436617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.444619] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.444641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.452641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.452661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.460662] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.460682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.468685] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.468705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.476709] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.476729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.484736] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.484759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.492754] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.492775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.500775] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.500801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.508802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.508824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.516830] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.516866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.524869] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.524892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.532885] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.532908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.540905] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.540928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.548924] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.548946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.556945] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.556966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.564967] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.564988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.782 [2024-07-20 17:53:29.572995] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.782 [2024-07-20 17:53:29.573018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.580997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.581018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.589132] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.589172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 Running I/O for 5 seconds... 00:17:55.040 [2024-07-20 17:53:29.597149] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.597185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.611567] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.611595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.626236] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.626265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.640301] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.640329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.652693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.652722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.663571] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.663599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.672944] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.672972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.687924] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 
[2024-07-20 17:53:29.687952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.697966] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.697994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.712881] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.712909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.723268] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.723295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.735586] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.735613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.745754] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.745782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.761689] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.761716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.771329] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.771355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.785455] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.785482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.795427] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.795454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.806820] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.806857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.817996] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.818023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.040 [2024-07-20 17:53:29.833370] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.040 [2024-07-20 17:53:29.833397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:29.846356] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:29.846381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:29.855832] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:29.855861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:29.871724] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:29.871753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:29.883913] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:29.883941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:29.897166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:29.897193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:29.908837] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:29.908864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:29.923962] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:29.923991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:29.933899] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:29.933927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:29.947304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:29.947331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:29.957573] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:29.957600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:29.970251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:29.970279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:29.983807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:29.983835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:29.995353] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:29.995380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:30.011108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:30.011143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:30.025604] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:30.025638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:30.039535] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:30.039564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:30.052517] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:30.052555] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:30.064591] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:30.064617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:30.075282] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:30.075323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.298 [2024-07-20 17:53:30.086689] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.298 [2024-07-20 17:53:30.086716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.098705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.098732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.111010] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.111039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.121284] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.121311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.131766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.131815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.143597] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.143623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.157561] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.157587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.168318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.168346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.179490] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.179517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.191307] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.191334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.202028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.202056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.214236] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.214264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.224865] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.224893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.237059] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.237096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.246996] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.247025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.260284] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.260310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.269701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.269735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.281623] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.281650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.291418] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.291444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.304070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.304117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.314917] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.314945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.328597] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.328624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.556 [2024-07-20 17:53:30.340181] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.556 [2024-07-20 17:53:30.340207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.354564] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.354592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.366435] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.366461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.376426] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.376453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.389111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.389153] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.398632] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.398658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.410964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.410991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.422917] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.422945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.436338] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.436365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.446202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.446228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.458489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.458516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.468734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.468761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.478024] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.478052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.490286] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.490320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.500244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.500271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.511572] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.511598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.523936] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.523964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.533436] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.533463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.548689] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.548716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.559721] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.559748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.572258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.572285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.586577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.586604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.815 [2024-07-20 17:53:30.598754] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.815 [2024-07-20 17:53:30.598808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.073 [2024-07-20 17:53:30.615314] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.073 [2024-07-20 17:53:30.615341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.073 [2024-07-20 17:53:30.625996] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.073 [2024-07-20 17:53:30.626023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.073 [2024-07-20 17:53:30.639274] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.073 [2024-07-20 17:53:30.639302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.073 [2024-07-20 17:53:30.651512] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.073 [2024-07-20 17:53:30.651540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.073 [2024-07-20 17:53:30.662191] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.073 [2024-07-20 17:53:30.662218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.073 [2024-07-20 17:53:30.677857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.073 [2024-07-20 17:53:30.677894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.073 [2024-07-20 17:53:30.688252] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.073 [2024-07-20 17:53:30.688293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.073 [2024-07-20 17:53:30.700511] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.073 [2024-07-20 17:53:30.700539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.073 [2024-07-20 17:53:30.711672] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.073 [2024-07-20 17:53:30.711699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.073 [2024-07-20 17:53:30.726904] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.073 [2024-07-20 17:53:30.726931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.073 [2024-07-20 17:53:30.737215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.073 [2024-07-20 17:53:30.737243] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.074 [2024-07-20 17:53:30.754155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.074 [2024-07-20 17:53:30.754198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.074 [2024-07-20 17:53:30.763951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.074 [2024-07-20 17:53:30.763980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.074 [2024-07-20 17:53:30.778550] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.074 [2024-07-20 17:53:30.778578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.074 [2024-07-20 17:53:30.792056] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.074 [2024-07-20 17:53:30.792098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.074 [2024-07-20 17:53:30.803265] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.074 [2024-07-20 17:53:30.803295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.074 [2024-07-20 17:53:30.819229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.074 [2024-07-20 17:53:30.819256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.074 [2024-07-20 17:53:30.828034] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.074 [2024-07-20 17:53:30.828062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.074 [2024-07-20 17:53:30.841741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.074 [2024-07-20 17:53:30.841769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.074 [2024-07-20 17:53:30.857040] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.074 [2024-07-20 17:53:30.857068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:30.871683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:30.871712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:30.886411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:30.886438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:30.897212] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:30.897239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:30.911834] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:30.911870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:30.927385] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:30.927412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:30.937448] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:30.937475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:30.950527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:30.950555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:30.962190] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:30.962218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:30.971727] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:30.971754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:30.986200] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:30.986227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:30.998442] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:30.998469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:31.011787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:31.011828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:31.024246] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:31.024273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:31.036700] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:31.036728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:31.048139] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:31.048167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:31.060640] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:31.060667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:31.070076] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:31.070118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:31.081280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:31.081307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:31.091735] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:31.091762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:31.107216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:31.107243] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.332 [2024-07-20 17:53:31.122057] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.332 [2024-07-20 17:53:31.122100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.590 [2024-07-20 17:53:31.131858] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.590 [2024-07-20 17:53:31.131886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.590 [2024-07-20 17:53:31.146448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.590 [2024-07-20 17:53:31.146474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.590 [2024-07-20 17:53:31.158566] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.590 [2024-07-20 17:53:31.158591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.590 [2024-07-20 17:53:31.169344] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.590 [2024-07-20 17:53:31.169370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.590 [2024-07-20 17:53:31.184119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.590 [2024-07-20 17:53:31.184146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.590 [2024-07-20 17:53:31.197426] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.590 [2024-07-20 17:53:31.197452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.590 [2024-07-20 17:53:31.207253] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.590 [2024-07-20 17:53:31.207280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.590 [2024-07-20 17:53:31.222128] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.590 [2024-07-20 17:53:31.222169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.590 [2024-07-20 17:53:31.233990] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.590 [2024-07-20 17:53:31.234018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.590 [2024-07-20 17:53:31.249213] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.590 [2024-07-20 17:53:31.249240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.590 [2024-07-20 17:53:31.260371] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.590 [2024-07-20 17:53:31.260396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.590 [2024-07-20 17:53:31.272546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.590 [2024-07-20 17:53:31.272573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.590 [2024-07-20 17:53:31.284367] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.590 [2024-07-20 17:53:31.284394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.590 [2024-07-20 17:53:31.296920] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.590 [2024-07-20 17:53:31.296948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.590 [2024-07-20 17:53:31.309232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.591 [2024-07-20 17:53:31.309259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.591 [2024-07-20 17:53:31.320853] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.591 [2024-07-20 17:53:31.320881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.591 [2024-07-20 17:53:31.333391] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.591 [2024-07-20 17:53:31.333418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.591 [2024-07-20 17:53:31.343225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.591 [2024-07-20 17:53:31.343252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.591 [2024-07-20 17:53:31.356381] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.591 [2024-07-20 17:53:31.356407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.591 [2024-07-20 17:53:31.367718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.591 [2024-07-20 17:53:31.367745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.591 [2024-07-20 17:53:31.380573] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.591 [2024-07-20 17:53:31.380600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.390238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.390266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.403456] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.403483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.416579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.416606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.430045] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.430073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.443008] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.443037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.453248] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.453276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.463047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.463074] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.475407] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.475435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.489246] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.489274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.498853] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.498881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.511659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.511685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.522408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.522447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.533116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.533141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.543728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.543756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.555071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.555099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.566846] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.566874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.579976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.580004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.591713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.591739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.607194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.607223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.621810] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.621837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.849 [2024-07-20 17:53:31.640602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.849 [2024-07-20 17:53:31.640629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.650996] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.651025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.665176] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.665212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.674650] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.674676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.689010] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.689038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.699673] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.699700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.712370] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.712397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.725575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.725603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.738105] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.738132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.751968] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.751996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.764109] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.764136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.774951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.774978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.790883] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.790910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.802734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.802758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.815443] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.815470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.829541] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.829568] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.843732] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.843757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.854833] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.854860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.868778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.868828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.881138] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.881165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.108 [2024-07-20 17:53:31.893156] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.108 [2024-07-20 17:53:31.893196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:31.904142] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:31.904188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:31.918568] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:31.918595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:31.931867] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:31.931895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:31.941553] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:31.941589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:31.956013] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:31.956041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:31.967183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:31.967209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:31.980015] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:31.980042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:31.991916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:31.991944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:32.004930] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:32.004959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:32.018544] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:32.018570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:32.030970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:32.030996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:32.040770] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:32.040821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:32.057260] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:32.057287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:32.070802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:32.070830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:32.080532] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:32.080558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:32.092224] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:32.092250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:32.103659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:32.103685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:32.120144] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:32.120186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:32.132085] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:32.132110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:32.142201] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:32.142238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.366 [2024-07-20 17:53:32.155475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.366 [2024-07-20 17:53:32.155502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.166252] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.166277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.175763] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.175811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.190440] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.190467] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.200738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.200765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.216515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.216543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.228639] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.228666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.241493] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.241519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.252107] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.252131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.262820] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.262861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.274765] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.274816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.284948] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.284976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.299007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.299035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.309166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.309193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.321142] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.321166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.333513] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.333540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.349549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.349576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.359946] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.359974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.374771] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.374832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.384710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.384752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.399760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.399811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.634 [2024-07-20 17:53:32.409394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.634 [2024-07-20 17:53:32.409421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.423084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.423113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.435738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.435765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.446741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.446768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.460253] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.460281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.471071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.471113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.484271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.484298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.493924] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.493952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.508535] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.508562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.520050] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.520091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.530308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.530335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.543532] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.543563] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.554789] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.554844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.565701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.565728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.579111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.579140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.594126] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.594154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.604710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.604750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.618806] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.618858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.629404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.629431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.642436] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.642464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.652165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.652192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.665234] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.665262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.677989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.678028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.690558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.690585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.705431] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.705458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.715112] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.715140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.942 [2024-07-20 17:53:32.729228] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.942 [2024-07-20 17:53:32.729255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.200 [2024-07-20 17:53:32.741295] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.200 [2024-07-20 17:53:32.741326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.200 [2024-07-20 17:53:32.751264] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.200 [2024-07-20 17:53:32.751295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.200 [2024-07-20 17:53:32.763234] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.200 [2024-07-20 17:53:32.763262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.200 [2024-07-20 17:53:32.773136] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.200 [2024-07-20 17:53:32.773163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.200 [2024-07-20 17:53:32.788222] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.200 [2024-07-20 17:53:32.788255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.200 [2024-07-20 17:53:32.803399] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.200 [2024-07-20 17:53:32.803424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.200 [2024-07-20 17:53:32.813899] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.200 [2024-07-20 17:53:32.813927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.200 [2024-07-20 17:53:32.829113] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.200 [2024-07-20 17:53:32.829141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.200 [2024-07-20 17:53:32.840584] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.200 [2024-07-20 17:53:32.840611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.200 [2024-07-20 17:53:32.854800] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.200 [2024-07-20 17:53:32.854827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.200 [2024-07-20 17:53:32.864617] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.200 [2024-07-20 17:53:32.864644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.200 [2024-07-20 17:53:32.877114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.201 [2024-07-20 17:53:32.877142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.201 [2024-07-20 17:53:32.886990] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.201 [2024-07-20 17:53:32.887017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.201 [2024-07-20 17:53:32.899156] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.201 [2024-07-20 17:53:32.899183] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.201 [2024-07-20 17:53:32.910692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.201 [2024-07-20 17:53:32.910719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.201 [2024-07-20 17:53:32.922088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.201 [2024-07-20 17:53:32.922116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.201 [2024-07-20 17:53:32.933214] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.201 [2024-07-20 17:53:32.933242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.201 [2024-07-20 17:53:32.945732] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.201 [2024-07-20 17:53:32.945773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.201 [2024-07-20 17:53:32.959239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.201 [2024-07-20 17:53:32.959266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.201 [2024-07-20 17:53:32.969379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.201 [2024-07-20 17:53:32.969420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.201 [2024-07-20 17:53:32.984019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.201 [2024-07-20 17:53:32.984048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.201 [2024-07-20 17:53:32.993628] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.201 [2024-07-20 17:53:32.993655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.008562] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.008589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.019650] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.019694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.030412] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.030439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.041923] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.041952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.056419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.056447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.069388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.069415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.085813] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.085856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.098250] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.098285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.109709] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.109736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.122247] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.122274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.137304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.137332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.148124] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.148152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.159100] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.159128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.173588] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.173616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.186594] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.186621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.196180] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.196208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.209355] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.209382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.220744] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.220772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.232814] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.232853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.458 [2024-07-20 17:53:33.244365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.458 [2024-07-20 17:53:33.244393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.257259] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.257288] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.268508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.268537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.277723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.277752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.291908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.291936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.301216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.301244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.313844] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.313871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.326205] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.326233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.339340] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.339369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.349925] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.349952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.359840] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.359881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.375531] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.375559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.386803] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.386831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.397625] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.397656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.407504] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.407532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.419761] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.419789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.431427] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.431454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.443622] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.443650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.454149] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.454178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.468009] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.468036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.479608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.479636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.490182] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.490209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.716 [2024-07-20 17:53:33.501605] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.716 [2024-07-20 17:53:33.501632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.973 [2024-07-20 17:53:33.514185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.973 [2024-07-20 17:53:33.514220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.973 [2024-07-20 17:53:33.523598] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.973 [2024-07-20 17:53:33.523626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.973 [2024-07-20 17:53:33.536411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.973 [2024-07-20 17:53:33.536438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.546365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.546392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.558696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.558724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.569034] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.569061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.583805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.583832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.593085] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.593113] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.606500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.606527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.616961] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.616989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.631347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.631375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.641447] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.641474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.655282] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.655309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.668437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.668464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.680507] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.680536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.690044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.690072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.702712] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.702740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.715728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.715759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.725131] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.725158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.740479] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.740518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.750396] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.750426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.974 [2024-07-20 17:53:33.762541] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:58.974 [2024-07-20 17:53:33.762569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.772416] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.772443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.785570] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.785596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.799750] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.799802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.809219] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.809260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.824938] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.824965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.836787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.836823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.849295] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.849323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.859701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.859728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.872655] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.872682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.883327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.883354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.895451] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.895478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.907211] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.907238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.919845] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.919873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.930951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.930979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.944552] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.944579] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.955202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.955228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.968723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.968758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.978505] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.978533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:33.990233] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:33.990260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:34.002084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:34.002126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:34.014961] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:34.014990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.243 [2024-07-20 17:53:34.027843] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.243 [2024-07-20 17:53:34.027871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.039690] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.039717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.054383] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.054410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.066122] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.066149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.079516] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.079544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.093023] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.093051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.107361] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.107388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.118984] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.119012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.133211] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.133238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.144048] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.144091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.156315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.156345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.168982] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.169010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.180563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.180589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.191339] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.191371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.201688] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.201722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.215965] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.215992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.226014] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.226042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.239870] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.239914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.251440] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.251467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.266577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.266604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.277425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.277452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.501 [2024-07-20 17:53:34.287096] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.501 [2024-07-20 17:53:34.287123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.299778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.299813] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.312456] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.312483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.324309] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.324340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.339262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.339290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.348787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.348823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.361535] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.361562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.376681] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.376708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.392502] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.392529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.404036] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.404062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.415666] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.415693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.425964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.425991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.436208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.436236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.446826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.446854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.458957] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.458984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.470423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.470450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.479989] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.480017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.493357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.493384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.502909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.502936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.517235] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.517263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.532396] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.532423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.759 [2024-07-20 17:53:34.546081] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.759 [2024-07-20 17:53:34.546123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.556486] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.556516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.569019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.569048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.581683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.581710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.594464] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.594491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.606687] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.606729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.618142] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.618169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 00:18:00.017 Latency(us) 00:18:00.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.017 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:00.017 Nvme1n1 : 5.01 10400.79 81.26 0.00 0.00 12286.38 4490.43 26796.94 00:18:00.017 =================================================================================================================== 00:18:00.017 Total : 10400.79 81.26 0.00 0.00 12286.38 4490.43 26796.94 00:18:00.017 [2024-07-20 17:53:34.624931] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.624956] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.632940] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.632967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.640991] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.641026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.649038] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.649087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.657053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.657115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.665069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.665113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.673088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.673134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.681113] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.681158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.689142] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.689188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.697173] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.697219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.705183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.705227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.713227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.713274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.721228] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.721270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.729258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.729302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.737279] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.737320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.745293] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.745340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.753313] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.753359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.761329] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.761367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.769322] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.769349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.777376] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.777414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.785420] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.785466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.793422] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.793463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.017 [2024-07-20 17:53:34.801426] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.017 [2024-07-20 17:53:34.801463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.018 [2024-07-20 17:53:34.809432] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.018 [2024-07-20 17:53:34.809461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.275 [2024-07-20 17:53:34.817495] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.275 [2024-07-20 17:53:34.817537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.275 [2024-07-20 17:53:34.825508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.275 [2024-07-20 17:53:34.825549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.275 [2024-07-20 17:53:34.833514] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.275 [2024-07-20 17:53:34.833550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.275 [2024-07-20 17:53:34.841507] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.275 [2024-07-20 17:53:34.841532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.275 [2024-07-20 17:53:34.849529] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.275 [2024-07-20 17:53:34.849554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (941959) - No such process 00:18:00.275 17:53:34 nvmf_tcp.nvmf_zcopy -- 
target/zcopy.sh@49 -- # wait 941959 00:18:00.275 17:53:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:00.275 17:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.275 17:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:00.276 17:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.276 17:53:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:00.276 17:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.276 17:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:00.276 delay0 00:18:00.276 17:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.276 17:53:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:00.276 17:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.276 17:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:00.276 17:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.276 17:53:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:00.276 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.276 [2024-07-20 17:53:34.936989] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:06.828 Initializing NVMe Controllers 00:18:06.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:06.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:06.828 Initialization complete. Launching workers. 
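For reference, the namespace swap that zcopy.sh performs above can be reproduced by hand. The following is a minimal sketch only: it assumes a running SPDK nvmf target that already exposes nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and already has a malloc0 bdev (created earlier in the test, not shown here), and it uses scripts/rpc.py in place of the test suite's rpc_cmd wrapper. The flag values simply mirror the ones recorded in the log above.

# Drop the existing namespace 1, wrap malloc0 in a delay bdev, and re-add it as NSID 1
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # -r/-t/-w/-n: average and p99 read/write latencies, in microseconds
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# Drive the slowed-down namespace with the abort example so commands stay queued long enough to be aborted
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The 1000000-microsecond delay values are taken verbatim from the bdev_delay_create call in the log; their purpose there is to keep I/O outstanding so the abort workload below has commands to cancel, which is what the abort submission/success counts that follow report on.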
00:18:06.828 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 70 00:18:06.828 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 357, failed to submit 33 00:18:06.828 success 137, unsuccess 220, failed 0 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:06.828 rmmod nvme_tcp 00:18:06.828 rmmod nvme_fabrics 00:18:06.828 rmmod nvme_keyring 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 940627 ']' 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 940627 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 940627 ']' 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 940627 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 940627 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 940627' 00:18:06.828 killing process with pid 940627 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 940627 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 940627 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.828 17:53:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.353 17:53:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:09.353 00:18:09.353 real 0m27.962s 00:18:09.353 user 0m41.381s 00:18:09.353 sys 0m8.248s 00:18:09.353 17:53:43 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:18:09.353 17:53:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:09.353 ************************************ 00:18:09.353 END TEST nvmf_zcopy 00:18:09.353 ************************************ 00:18:09.353 17:53:43 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:09.353 17:53:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:09.353 17:53:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:09.353 17:53:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:09.353 ************************************ 00:18:09.353 START TEST nvmf_nmic 00:18:09.353 ************************************ 00:18:09.353 17:53:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:09.353 * Looking for test storage... 00:18:09.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:09.354 17:53:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:11.253 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:11.253 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:11.253 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.253 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:11.254 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:11.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:18:11.254 00:18:11.254 --- 10.0.0.2 ping statistics --- 00:18:11.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.254 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:11.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:11.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:18:11.254 00:18:11.254 --- 10.0.0.1 ping statistics --- 00:18:11.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.254 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=945338 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 945338 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 945338 ']' 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:11.254 17:53:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:11.254 [2024-07-20 17:53:45.959193] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:11.254 [2024-07-20 17:53:45.959286] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.254 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.254 [2024-07-20 17:53:46.026090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:11.513 [2024-07-20 17:53:46.118751] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.513 [2024-07-20 17:53:46.118810] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:11.513 [2024-07-20 17:53:46.118840] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.513 [2024-07-20 17:53:46.118851] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.513 [2024-07-20 17:53:46.118861] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.513 [2024-07-20 17:53:46.118990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.513 [2024-07-20 17:53:46.119145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.513 [2024-07-20 17:53:46.119210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:11.513 [2024-07-20 17:53:46.119214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.513 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:11.513 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:11.513 17:53:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:11.513 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.513 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:11.513 17:53:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.513 17:53:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:11.513 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.513 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:11.513 [2024-07-20 17:53:46.284547] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.513 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.513 17:53:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:11.513 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.513 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:11.770 Malloc0 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:11.770 [2024-07-20 17:53:46.338313] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:11.770 test case1: single bdev can't be used in multiple subsystems 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:11.770 17:53:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:11.771 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.771 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:11.771 [2024-07-20 17:53:46.362170] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:11.771 [2024-07-20 17:53:46.362198] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:11.771 [2024-07-20 17:53:46.362227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.771 request: 00:18:11.771 { 00:18:11.771 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:11.771 "namespace": { 00:18:11.771 "bdev_name": "Malloc0", 00:18:11.771 "no_auto_visible": false 00:18:11.771 }, 00:18:11.771 "method": "nvmf_subsystem_add_ns", 00:18:11.771 "req_id": 1 00:18:11.771 } 00:18:11.771 Got JSON-RPC error response 00:18:11.771 response: 00:18:11.771 { 00:18:11.771 "code": -32602, 00:18:11.771 "message": "Invalid parameters" 00:18:11.771 } 00:18:11.771 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:11.771 17:53:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:11.771 17:53:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:11.771 17:53:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:11.771 Adding namespace failed - expected result. 
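(Sketch, not part of the captured output: the rejected nvmf_subsystem_add_ns call traced in test case1 above can be reproduced against the same running target with the plain RPC client, assuming the default /var/tmp/spdk.sock RPC socket that waitforlisten used earlier. All subcommands and flags below are the ones the test itself issued.)

    # Second subsystem tries to claim the malloc bdev already owned by cnode1;
    # SPDK refuses because Malloc0 is claimed exclusive_write, and the RPC
    # returns the -32602 "Invalid parameters" error shown above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail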
00:18:11.771 17:53:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:11.771 test case2: host connect to nvmf target in multiple paths 00:18:11.771 17:53:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:11.771 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.771 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:11.771 [2024-07-20 17:53:46.370278] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:11.771 17:53:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.771 17:53:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:12.336 17:53:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:12.902 17:53:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:12.902 17:53:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:12.902 17:53:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:12.902 17:53:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:12.902 17:53:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:14.798 17:53:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:14.798 17:53:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:14.798 17:53:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:15.056 17:53:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:15.056 17:53:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:15.056 17:53:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:15.056 17:53:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:15.056 [global] 00:18:15.056 thread=1 00:18:15.056 invalidate=1 00:18:15.056 rw=write 00:18:15.056 time_based=1 00:18:15.056 runtime=1 00:18:15.056 ioengine=libaio 00:18:15.056 direct=1 00:18:15.056 bs=4096 00:18:15.056 iodepth=1 00:18:15.056 norandommap=0 00:18:15.056 numjobs=1 00:18:15.056 00:18:15.056 verify_dump=1 00:18:15.056 verify_backlog=512 00:18:15.056 verify_state_save=0 00:18:15.056 do_verify=1 00:18:15.056 verify=crc32c-intel 00:18:15.056 [job0] 00:18:15.056 filename=/dev/nvme0n1 00:18:15.056 Could not set queue depth (nvme0n1) 00:18:15.056 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:15.056 fio-3.35 00:18:15.056 Starting 1 thread 00:18:16.428 00:18:16.428 job0: (groupid=0, jobs=1): err= 0: pid=945845: Sat Jul 20 17:53:50 2024 00:18:16.428 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:16.428 slat (nsec): min=6458, max=28096, avg=8286.69, stdev=2087.78 
00:18:16.428 clat (usec): min=429, max=852, avg=543.56, stdev=59.11 00:18:16.428 lat (usec): min=438, max=860, avg=551.85, stdev=58.97 00:18:16.428 clat percentiles (usec): 00:18:16.428 | 1.00th=[ 437], 5.00th=[ 453], 10.00th=[ 461], 20.00th=[ 486], 00:18:16.428 | 30.00th=[ 502], 40.00th=[ 515], 50.00th=[ 553], 60.00th=[ 578], 00:18:16.428 | 70.00th=[ 586], 80.00th=[ 594], 90.00th=[ 611], 95.00th=[ 619], 00:18:16.428 | 99.00th=[ 668], 99.50th=[ 717], 99.90th=[ 807], 99.95th=[ 857], 00:18:16.428 | 99.99th=[ 857] 00:18:16.428 write: IOPS=1219, BW=4879KiB/s (4996kB/s)(4884KiB/1001msec); 0 zone resets 00:18:16.428 slat (nsec): min=7338, max=44956, avg=11913.12, stdev=4432.92 00:18:16.428 clat (usec): min=291, max=1668, avg=339.14, stdev=49.63 00:18:16.428 lat (usec): min=299, max=1691, avg=351.05, stdev=50.93 00:18:16.428 clat percentiles (usec): 00:18:16.428 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 314], 00:18:16.429 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 343], 00:18:16.429 | 70.00th=[ 347], 80.00th=[ 351], 90.00th=[ 392], 95.00th=[ 400], 00:18:16.429 | 99.00th=[ 437], 99.50th=[ 474], 99.90th=[ 603], 99.95th=[ 1663], 00:18:16.429 | 99.99th=[ 1663] 00:18:16.429 bw ( KiB/s): min= 5368, max= 5368, per=100.00%, avg=5368.00, stdev= 0.00, samples=1 00:18:16.429 iops : min= 1342, max= 1342, avg=1342.00, stdev= 0.00, samples=1 00:18:16.429 lat (usec) : 500=66.95%, 750=32.87%, 1000=0.13% 00:18:16.429 lat (msec) : 2=0.04% 00:18:16.429 cpu : usr=1.20%, sys=3.60%, ctx=2245, majf=0, minf=2 00:18:16.429 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.429 issued rwts: total=1024,1221,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.429 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.429 00:18:16.429 Run status group 0 (all jobs): 00:18:16.429 READ: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:18:16.429 WRITE: bw=4879KiB/s (4996kB/s), 4879KiB/s-4879KiB/s (4996kB/s-4996kB/s), io=4884KiB (5001kB), run=1001-1001msec 00:18:16.429 00:18:16.429 Disk stats (read/write): 00:18:16.429 nvme0n1: ios=1018/1024, merge=0/0, ticks=576/344, in_queue=920, util=92.89% 00:18:16.429 17:53:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:16.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
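(Sketch, not part of the captured output: the host-side connect / wait-for-serial / disconnect cycle that nmic.sh drove above reduces roughly to the commands below, using the host NQN/ID generated earlier in this run; the polling loop is a simplified stand-in for the waitforserial helper.)

    # Attach cnode1 over both listeners (4420 and 4421), wait for the namespace
    # to appear as a block device, then detach it again.
    host_opts="--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55"
    nvme connect $host_opts -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect $host_opts -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1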
00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:16.429 rmmod nvme_tcp 00:18:16.429 rmmod nvme_fabrics 00:18:16.429 rmmod nvme_keyring 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 945338 ']' 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 945338 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 945338 ']' 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 945338 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 945338 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 945338' 00:18:16.429 killing process with pid 945338 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 945338 00:18:16.429 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 945338 00:18:16.687 17:53:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:16.687 17:53:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:16.687 17:53:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:16.687 17:53:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.687 17:53:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:16.687 17:53:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.688 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.688 17:53:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.217 17:53:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:19.217 00:18:19.217 real 0m9.757s 00:18:19.217 user 0m21.764s 00:18:19.217 sys 0m2.325s 00:18:19.217 17:53:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:19.217 17:53:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:19.217 ************************************ 00:18:19.217 END TEST nvmf_nmic 00:18:19.217 ************************************ 00:18:19.217 17:53:53 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:19.217 17:53:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:19.217 17:53:53 nvmf_tcp -- common/autotest_common.sh@1103 
-- # xtrace_disable 00:18:19.217 17:53:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:19.217 ************************************ 00:18:19.217 START TEST nvmf_fio_target 00:18:19.217 ************************************ 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:19.217 * Looking for test storage... 00:18:19.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.217 17:53:53 
nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:19.217 17:53:53 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:19.217 17:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:20.590 17:53:55 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:20.590 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:20.591 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:20.591 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:20.591 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:20.591 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:20.591 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:20.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:20.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:18:20.848 00:18:20.848 --- 10.0.0.2 ping statistics --- 00:18:20.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.848 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:20.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:20.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:18:20.848 00:18:20.848 --- 10.0.0.1 ping statistics --- 00:18:20.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.848 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=947921 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 947921 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 947921 ']' 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:20.848 17:53:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.848 [2024-07-20 17:53:55.565730] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
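Note: at this point nvmf_tcp_init has built the loopback topology used by the test: the first e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), the second port (cvl_0_1) stays in the root namespace as 10.0.0.1 (initiator side), TCP port 4420 is opened in iptables, and connectivity is verified with ping in both directions. Condensed into a standalone sketch, the commands are the ones traced above; the interface names are specific to this machine:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application being started here is launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt ...), so the kernel initiator in the root namespace reaches it over a real NIC-to-NIC path rather than loopback.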
00:18:20.848 [2024-07-20 17:53:55.565833] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.848 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.848 [2024-07-20 17:53:55.630225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:21.106 [2024-07-20 17:53:55.719483] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.106 [2024-07-20 17:53:55.719535] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.106 [2024-07-20 17:53:55.719549] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.106 [2024-07-20 17:53:55.719561] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.106 [2024-07-20 17:53:55.719570] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.106 [2024-07-20 17:53:55.719649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.106 [2024-07-20 17:53:55.719717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.106 [2024-07-20 17:53:55.719783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:21.106 [2024-07-20 17:53:55.719786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.106 17:53:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:21.106 17:53:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:18:21.106 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:21.106 17:53:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:21.106 17:53:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.106 17:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.106 17:53:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:21.363 [2024-07-20 17:53:56.091164] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.363 17:53:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:21.927 17:53:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:21.927 17:53:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:21.927 17:53:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:21.927 17:53:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:22.184 17:53:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:22.184 17:53:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:22.442 17:53:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:22.442 17:53:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:22.714 17:53:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:22.972 17:53:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:22.972 17:53:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:23.231 17:53:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:23.231 17:53:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:23.489 17:53:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:23.489 17:53:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:23.745 17:53:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:24.003 17:53:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:24.003 17:53:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:24.260 17:53:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:24.260 17:53:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:24.517 17:53:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.774 [2024-07-20 17:53:59.446417] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.774 17:53:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:25.032 17:53:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:25.289 17:53:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:25.863 17:54:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:25.863 17:54:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:18:25.863 17:54:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:25.863 17:54:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:18:25.863 17:54:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:18:25.863 17:54:00 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1201 -- # sleep 2 00:18:27.759 17:54:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:27.759 17:54:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:27.759 17:54:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:27.759 17:54:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:18:27.759 17:54:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:27.759 17:54:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:18:27.759 17:54:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:27.759 [global] 00:18:27.759 thread=1 00:18:27.759 invalidate=1 00:18:27.759 rw=write 00:18:27.759 time_based=1 00:18:27.759 runtime=1 00:18:27.759 ioengine=libaio 00:18:27.759 direct=1 00:18:27.759 bs=4096 00:18:27.759 iodepth=1 00:18:27.759 norandommap=0 00:18:27.759 numjobs=1 00:18:27.759 00:18:27.759 verify_dump=1 00:18:27.759 verify_backlog=512 00:18:27.759 verify_state_save=0 00:18:27.759 do_verify=1 00:18:27.759 verify=crc32c-intel 00:18:27.759 [job0] 00:18:27.759 filename=/dev/nvme0n1 00:18:27.759 [job1] 00:18:27.759 filename=/dev/nvme0n2 00:18:27.759 [job2] 00:18:27.759 filename=/dev/nvme0n3 00:18:27.759 [job3] 00:18:27.759 filename=/dev/nvme0n4 00:18:28.015 Could not set queue depth (nvme0n1) 00:18:28.015 Could not set queue depth (nvme0n2) 00:18:28.015 Could not set queue depth (nvme0n3) 00:18:28.015 Could not set queue depth (nvme0n4) 00:18:28.015 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:28.015 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:28.015 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:28.015 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:28.015 fio-3.35 00:18:28.015 Starting 4 threads 00:18:29.406 00:18:29.406 job0: (groupid=0, jobs=1): err= 0: pid=949102: Sat Jul 20 17:54:03 2024 00:18:29.406 read: IOPS=924, BW=3696KiB/s (3785kB/s)(3700KiB/1001msec) 00:18:29.406 slat (nsec): min=4891, max=57933, avg=16195.29, stdev=7272.39 00:18:29.406 clat (usec): min=419, max=798, avg=527.68, stdev=57.23 00:18:29.406 lat (usec): min=435, max=817, avg=543.87, stdev=58.77 00:18:29.406 clat percentiles (usec): 00:18:29.406 | 1.00th=[ 441], 5.00th=[ 465], 10.00th=[ 474], 20.00th=[ 482], 00:18:29.406 | 30.00th=[ 490], 40.00th=[ 502], 50.00th=[ 515], 60.00th=[ 529], 00:18:29.406 | 70.00th=[ 545], 80.00th=[ 570], 90.00th=[ 611], 95.00th=[ 644], 00:18:29.406 | 99.00th=[ 701], 99.50th=[ 742], 99.90th=[ 799], 99.95th=[ 799], 00:18:29.406 | 99.99th=[ 799] 00:18:29.406 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:18:29.406 slat (usec): min=6, max=22183, avg=40.02, stdev=692.73 00:18:29.406 clat (usec): min=309, max=1073, avg=436.06, stdev=63.85 00:18:29.406 lat (usec): min=318, max=22785, avg=476.08, stdev=700.79 00:18:29.406 clat percentiles (usec): 00:18:29.406 | 1.00th=[ 330], 5.00th=[ 355], 10.00th=[ 367], 20.00th=[ 388], 00:18:29.406 | 30.00th=[ 404], 40.00th=[ 412], 50.00th=[ 433], 60.00th=[ 445], 00:18:29.406 | 70.00th=[ 461], 80.00th=[ 
482], 90.00th=[ 506], 95.00th=[ 529], 00:18:29.406 | 99.00th=[ 603], 99.50th=[ 693], 99.90th=[ 1020], 99.95th=[ 1074], 00:18:29.406 | 99.99th=[ 1074] 00:18:29.406 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:18:29.406 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:29.406 lat (usec) : 500=64.55%, 750=35.09%, 1000=0.26% 00:18:29.406 lat (msec) : 2=0.10% 00:18:29.406 cpu : usr=1.90%, sys=3.30%, ctx=1952, majf=0, minf=2 00:18:29.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:29.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.406 issued rwts: total=925,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:29.406 job1: (groupid=0, jobs=1): err= 0: pid=949103: Sat Jul 20 17:54:03 2024 00:18:29.406 read: IOPS=371, BW=1486KiB/s (1521kB/s)(1520KiB/1023msec) 00:18:29.406 slat (nsec): min=6636, max=40954, avg=13360.82, stdev=2567.61 00:18:29.406 clat (usec): min=522, max=41729, avg=1908.51, stdev=7101.25 00:18:29.406 lat (usec): min=535, max=41762, avg=1921.87, stdev=7102.58 00:18:29.406 clat percentiles (usec): 00:18:29.406 | 1.00th=[ 562], 5.00th=[ 586], 10.00th=[ 594], 20.00th=[ 603], 00:18:29.406 | 30.00th=[ 611], 40.00th=[ 619], 50.00th=[ 619], 60.00th=[ 627], 00:18:29.406 | 70.00th=[ 644], 80.00th=[ 660], 90.00th=[ 676], 95.00th=[ 709], 00:18:29.406 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:29.406 | 99.99th=[41681] 00:18:29.406 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:18:29.406 slat (usec): min=6, max=36796, avg=97.17, stdev=1639.37 00:18:29.406 clat (usec): min=295, max=1418, avg=466.41, stdev=125.79 00:18:29.406 lat (usec): min=302, max=37220, avg=563.57, stdev=1641.65 00:18:29.406 clat percentiles (usec): 00:18:29.406 | 1.00th=[ 306], 5.00th=[ 322], 10.00th=[ 343], 20.00th=[ 371], 00:18:29.406 | 30.00th=[ 404], 40.00th=[ 429], 50.00th=[ 449], 60.00th=[ 469], 00:18:29.406 | 70.00th=[ 490], 80.00th=[ 529], 90.00th=[ 578], 95.00th=[ 734], 00:18:29.407 | 99.00th=[ 930], 99.50th=[ 996], 99.90th=[ 1418], 99.95th=[ 1418], 00:18:29.407 | 99.99th=[ 1418] 00:18:29.407 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:18:29.407 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:29.407 lat (usec) : 500=41.93%, 750=53.81%, 1000=2.69% 00:18:29.407 lat (msec) : 2=0.22%, 50=1.35% 00:18:29.407 cpu : usr=0.59%, sys=1.37%, ctx=898, majf=0, minf=1 00:18:29.407 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:29.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.407 issued rwts: total=380,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.407 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:29.407 job2: (groupid=0, jobs=1): err= 0: pid=949104: Sat Jul 20 17:54:03 2024 00:18:29.407 read: IOPS=18, BW=73.5KiB/s (75.3kB/s)(76.0KiB/1034msec) 00:18:29.407 slat (nsec): min=9960, max=36934, avg=19417.89, stdev=7037.10 00:18:29.407 clat (usec): min=40910, max=41619, avg=41003.90, stdev=152.94 00:18:29.407 lat (usec): min=40926, max=41629, avg=41023.32, stdev=150.68 00:18:29.407 clat percentiles (usec): 00:18:29.407 | 1.00th=[41157], 5.00th=[41157], 
10.00th=[41157], 20.00th=[41157], 00:18:29.407 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:29.407 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:18:29.407 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:29.407 | 99.99th=[41681] 00:18:29.407 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:18:29.407 slat (nsec): min=6992, max=63388, avg=17089.56, stdev=7303.85 00:18:29.407 clat (usec): min=302, max=905, avg=475.26, stdev=74.45 00:18:29.407 lat (usec): min=318, max=913, avg=492.35, stdev=74.74 00:18:29.407 clat percentiles (usec): 00:18:29.407 | 1.00th=[ 318], 5.00th=[ 359], 10.00th=[ 379], 20.00th=[ 424], 00:18:29.407 | 30.00th=[ 437], 40.00th=[ 453], 50.00th=[ 469], 60.00th=[ 482], 00:18:29.407 | 70.00th=[ 502], 80.00th=[ 529], 90.00th=[ 578], 95.00th=[ 603], 00:18:29.407 | 99.00th=[ 660], 99.50th=[ 685], 99.90th=[ 906], 99.95th=[ 906], 00:18:29.407 | 99.99th=[ 906] 00:18:29.407 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:18:29.407 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:29.407 lat (usec) : 500=67.04%, 750=29.00%, 1000=0.38% 00:18:29.407 lat (msec) : 50=3.58% 00:18:29.407 cpu : usr=0.48%, sys=0.97%, ctx=532, majf=0, minf=1 00:18:29.407 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:29.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.407 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.407 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:29.407 job3: (groupid=0, jobs=1): err= 0: pid=949105: Sat Jul 20 17:54:03 2024 00:18:29.407 read: IOPS=96, BW=386KiB/s (395kB/s)(400KiB/1036msec) 00:18:29.407 slat (nsec): min=6568, max=34147, avg=15279.53, stdev=5511.40 00:18:29.407 clat (usec): min=573, max=42971, avg=7710.92, stdev=15587.58 00:18:29.407 lat (usec): min=591, max=42985, avg=7726.20, stdev=15588.80 00:18:29.407 clat percentiles (usec): 00:18:29.407 | 1.00th=[ 570], 5.00th=[ 594], 10.00th=[ 603], 20.00th=[ 627], 00:18:29.407 | 30.00th=[ 635], 40.00th=[ 652], 50.00th=[ 701], 60.00th=[ 750], 00:18:29.407 | 70.00th=[ 783], 80.00th=[ 832], 90.00th=[42206], 95.00th=[42206], 00:18:29.407 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:29.407 | 99.99th=[42730] 00:18:29.407 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:18:29.407 slat (nsec): min=6580, max=42835, avg=15321.01, stdev=5302.94 00:18:29.407 clat (usec): min=305, max=1159, avg=493.48, stdev=184.10 00:18:29.407 lat (usec): min=315, max=1177, avg=508.80, stdev=185.37 00:18:29.407 clat percentiles (usec): 00:18:29.407 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 338], 20.00th=[ 359], 00:18:29.407 | 30.00th=[ 383], 40.00th=[ 404], 50.00th=[ 441], 60.00th=[ 461], 00:18:29.407 | 70.00th=[ 490], 80.00th=[ 570], 90.00th=[ 840], 95.00th=[ 947], 00:18:29.407 | 99.00th=[ 1029], 99.50th=[ 1090], 99.90th=[ 1156], 99.95th=[ 1156], 00:18:29.407 | 99.99th=[ 1156] 00:18:29.407 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:18:29.407 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:29.407 lat (usec) : 500=60.95%, 750=22.06%, 1000=12.42% 00:18:29.407 lat (msec) : 2=1.80%, 50=2.78% 00:18:29.407 cpu : usr=0.48%, sys=0.77%, ctx=615, majf=0, minf=1 00:18:29.407 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:29.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.407 issued rwts: total=100,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.407 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:29.407 00:18:29.407 Run status group 0 (all jobs): 00:18:29.407 READ: bw=5498KiB/s (5630kB/s), 73.5KiB/s-3696KiB/s (75.3kB/s-3785kB/s), io=5696KiB (5833kB), run=1001-1036msec 00:18:29.407 WRITE: bw=9884KiB/s (10.1MB/s), 1977KiB/s-4092KiB/s (2024kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1036msec 00:18:29.407 00:18:29.407 Disk stats (read/write): 00:18:29.407 nvme0n1: ios=701/1024, merge=0/0, ticks=719/425, in_queue=1144, util=87.37% 00:18:29.407 nvme0n2: ios=420/512, merge=0/0, ticks=790/232, in_queue=1022, util=91.35% 00:18:29.407 nvme0n3: ios=68/512, merge=0/0, ticks=850/241, in_queue=1091, util=95.29% 00:18:29.407 nvme0n4: ios=94/512, merge=0/0, ticks=952/250, in_queue=1202, util=94.30% 00:18:29.407 17:54:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:29.407 [global] 00:18:29.407 thread=1 00:18:29.407 invalidate=1 00:18:29.407 rw=randwrite 00:18:29.407 time_based=1 00:18:29.407 runtime=1 00:18:29.407 ioengine=libaio 00:18:29.407 direct=1 00:18:29.407 bs=4096 00:18:29.407 iodepth=1 00:18:29.407 norandommap=0 00:18:29.407 numjobs=1 00:18:29.407 00:18:29.407 verify_dump=1 00:18:29.407 verify_backlog=512 00:18:29.407 verify_state_save=0 00:18:29.407 do_verify=1 00:18:29.407 verify=crc32c-intel 00:18:29.407 [job0] 00:18:29.407 filename=/dev/nvme0n1 00:18:29.407 [job1] 00:18:29.407 filename=/dev/nvme0n2 00:18:29.407 [job2] 00:18:29.407 filename=/dev/nvme0n3 00:18:29.407 [job3] 00:18:29.407 filename=/dev/nvme0n4 00:18:29.407 Could not set queue depth (nvme0n1) 00:18:29.407 Could not set queue depth (nvme0n2) 00:18:29.407 Could not set queue depth (nvme0n3) 00:18:29.407 Could not set queue depth (nvme0n4) 00:18:29.670 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:29.670 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:29.670 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:29.670 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:29.670 fio-3.35 00:18:29.670 Starting 4 threads 00:18:31.044 00:18:31.044 job0: (groupid=0, jobs=1): err= 0: pid=949338: Sat Jul 20 17:54:05 2024 00:18:31.044 read: IOPS=499, BW=1996KiB/s (2044kB/s)(2076KiB/1040msec) 00:18:31.044 slat (nsec): min=6262, max=57437, avg=12355.35, stdev=5492.51 00:18:31.044 clat (usec): min=426, max=41329, avg=1047.40, stdev=4331.65 00:18:31.044 lat (usec): min=433, max=41342, avg=1059.76, stdev=4332.96 00:18:31.044 clat percentiles (usec): 00:18:31.044 | 1.00th=[ 437], 5.00th=[ 469], 10.00th=[ 486], 20.00th=[ 510], 00:18:31.044 | 30.00th=[ 529], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 594], 00:18:31.044 | 70.00th=[ 611], 80.00th=[ 644], 90.00th=[ 685], 95.00th=[ 750], 00:18:31.044 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:31.044 | 99.99th=[41157] 00:18:31.044 write: IOPS=984, BW=3938KiB/s (4033kB/s)(4096KiB/1040msec); 0 zone resets 00:18:31.044 
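Note: the fio-wrapper invocations in this test (-p nvmf -i 4096 -d <depth> -t <rw> -r <runtime>) expand into the per-device job files printed above: libaio, direct=1, bs=4096, numjobs=1, crc32c-intel verification against /dev/nvme0n1..n4. A roughly equivalent standalone run for one of the devices, using only stock fio options, would look like the sketch below; this is an approximation of the wrapper output, not a copy of it:

  fio --name=job0 --filename=/dev/nvme0n1 \
      --ioengine=libaio --direct=1 --bs=4096 --iodepth=1 --numjobs=1 \
      --rw=randwrite --time_based=1 --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512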
slat (nsec): min=7314, max=72355, avg=21766.77, stdev=12237.01 00:18:31.044 clat (usec): min=305, max=628, avg=448.50, stdev=74.59 00:18:31.044 lat (usec): min=313, max=676, avg=470.27, stdev=81.98 00:18:31.044 clat percentiles (usec): 00:18:31.044 | 1.00th=[ 314], 5.00th=[ 326], 10.00th=[ 347], 20.00th=[ 379], 00:18:31.044 | 30.00th=[ 400], 40.00th=[ 420], 50.00th=[ 453], 60.00th=[ 474], 00:18:31.044 | 70.00th=[ 494], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 570], 00:18:31.044 | 99.00th=[ 586], 99.50th=[ 619], 99.90th=[ 627], 99.95th=[ 627], 00:18:31.044 | 99.99th=[ 627] 00:18:31.044 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=2 00:18:31.044 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:18:31.044 lat (usec) : 500=53.60%, 750=44.72%, 1000=1.23% 00:18:31.044 lat (msec) : 2=0.06%, 50=0.39% 00:18:31.044 cpu : usr=1.83%, sys=3.66%, ctx=1544, majf=0, minf=1 00:18:31.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:31.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.044 issued rwts: total=519,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:31.044 job1: (groupid=0, jobs=1): err= 0: pid=949339: Sat Jul 20 17:54:05 2024 00:18:31.044 read: IOPS=333, BW=1335KiB/s (1367kB/s)(1368KiB/1025msec) 00:18:31.044 slat (nsec): min=7050, max=48209, avg=14394.63, stdev=4257.14 00:18:31.044 clat (usec): min=468, max=41767, avg=1901.79, stdev=7177.94 00:18:31.044 lat (usec): min=481, max=41780, avg=1916.19, stdev=7179.02 00:18:31.044 clat percentiles (usec): 00:18:31.044 | 1.00th=[ 486], 5.00th=[ 510], 10.00th=[ 523], 20.00th=[ 545], 00:18:31.044 | 30.00th=[ 562], 40.00th=[ 578], 50.00th=[ 586], 60.00th=[ 603], 00:18:31.044 | 70.00th=[ 611], 80.00th=[ 627], 90.00th=[ 685], 95.00th=[ 922], 00:18:31.044 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:31.044 | 99.99th=[41681] 00:18:31.044 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:18:31.044 slat (nsec): min=8801, max=73820, avg=27138.10, stdev=12552.94 00:18:31.044 clat (usec): min=387, max=1651, avg=682.91, stdev=154.31 00:18:31.044 lat (usec): min=403, max=1670, avg=710.04, stdev=156.61 00:18:31.044 clat percentiles (usec): 00:18:31.044 | 1.00th=[ 404], 5.00th=[ 498], 10.00th=[ 537], 20.00th=[ 578], 00:18:31.044 | 30.00th=[ 611], 40.00th=[ 635], 50.00th=[ 652], 60.00th=[ 676], 00:18:31.044 | 70.00th=[ 709], 80.00th=[ 758], 90.00th=[ 873], 95.00th=[ 988], 00:18:31.044 | 99.00th=[ 1254], 99.50th=[ 1483], 99.90th=[ 1647], 99.95th=[ 1647], 00:18:31.044 | 99.99th=[ 1647] 00:18:31.044 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:18:31.044 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:31.044 lat (usec) : 500=4.10%, 750=81.03%, 1000=10.42% 00:18:31.044 lat (msec) : 2=3.16%, 50=1.29% 00:18:31.044 cpu : usr=1.46%, sys=1.95%, ctx=855, majf=0, minf=1 00:18:31.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:31.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.044 issued rwts: total=342,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:31.044 job2: 
(groupid=0, jobs=1): err= 0: pid=949340: Sat Jul 20 17:54:05 2024 00:18:31.044 read: IOPS=228, BW=914KiB/s (936kB/s)(936KiB/1024msec) 00:18:31.044 slat (nsec): min=12245, max=34516, avg=13860.18, stdev=3441.66 00:18:31.044 clat (usec): min=698, max=42164, avg=3260.31, stdev=9681.18 00:18:31.044 lat (usec): min=710, max=42177, avg=3274.17, stdev=9681.84 00:18:31.044 clat percentiles (usec): 00:18:31.044 | 1.00th=[ 734], 5.00th=[ 766], 10.00th=[ 766], 20.00th=[ 783], 00:18:31.044 | 30.00th=[ 791], 40.00th=[ 799], 50.00th=[ 816], 60.00th=[ 832], 00:18:31.044 | 70.00th=[ 848], 80.00th=[ 881], 90.00th=[ 947], 95.00th=[41157], 00:18:31.044 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:31.044 | 99.99th=[42206] 00:18:31.044 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:18:31.044 slat (nsec): min=7929, max=75770, avg=29832.95, stdev=12262.00 00:18:31.044 clat (usec): min=307, max=1189, avg=462.69, stdev=145.55 00:18:31.044 lat (usec): min=319, max=1227, avg=492.53, stdev=147.79 00:18:31.044 clat percentiles (usec): 00:18:31.044 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 330], 20.00th=[ 367], 00:18:31.044 | 30.00th=[ 383], 40.00th=[ 408], 50.00th=[ 441], 60.00th=[ 457], 00:18:31.044 | 70.00th=[ 474], 80.00th=[ 498], 90.00th=[ 635], 95.00th=[ 824], 00:18:31.044 | 99.00th=[ 996], 99.50th=[ 1037], 99.90th=[ 1188], 99.95th=[ 1188], 00:18:31.044 | 99.99th=[ 1188] 00:18:31.044 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:18:31.044 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:31.044 lat (usec) : 500=55.63%, 750=8.45%, 1000=33.38% 00:18:31.044 lat (msec) : 2=0.67%, 50=1.88% 00:18:31.044 cpu : usr=0.98%, sys=1.76%, ctx=747, majf=0, minf=2 00:18:31.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:31.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.044 issued rwts: total=234,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:31.044 job3: (groupid=0, jobs=1): err= 0: pid=949341: Sat Jul 20 17:54:05 2024 00:18:31.044 read: IOPS=299, BW=1198KiB/s (1227kB/s)(1240KiB/1035msec) 00:18:31.044 slat (nsec): min=7985, max=35961, avg=14016.29, stdev=4457.36 00:18:31.044 clat (usec): min=506, max=41018, avg=2090.94, stdev=7466.18 00:18:31.044 lat (usec): min=520, max=41033, avg=2104.95, stdev=7468.29 00:18:31.044 clat percentiles (usec): 00:18:31.044 | 1.00th=[ 519], 5.00th=[ 537], 10.00th=[ 545], 20.00th=[ 562], 00:18:31.044 | 30.00th=[ 578], 40.00th=[ 603], 50.00th=[ 619], 60.00th=[ 676], 00:18:31.044 | 70.00th=[ 766], 80.00th=[ 791], 90.00th=[ 848], 95.00th=[ 906], 00:18:31.044 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:31.044 | 99.99th=[41157] 00:18:31.044 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:18:31.044 slat (nsec): min=10876, max=80063, avg=29670.18, stdev=13025.84 00:18:31.044 clat (usec): min=401, max=1659, avg=706.10, stdev=146.31 00:18:31.044 lat (usec): min=437, max=1679, avg=735.77, stdev=147.37 00:18:31.044 clat percentiles (usec): 00:18:31.044 | 1.00th=[ 424], 5.00th=[ 523], 10.00th=[ 553], 20.00th=[ 603], 00:18:31.044 | 30.00th=[ 619], 40.00th=[ 652], 50.00th=[ 676], 60.00th=[ 701], 00:18:31.044 | 70.00th=[ 750], 80.00th=[ 816], 90.00th=[ 898], 95.00th=[ 963], 00:18:31.044 | 99.00th=[ 1188], 
99.50th=[ 1254], 99.90th=[ 1663], 99.95th=[ 1663], 00:18:31.044 | 99.99th=[ 1663] 00:18:31.044 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:18:31.044 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:31.044 lat (usec) : 500=1.95%, 750=67.40%, 1000=27.13% 00:18:31.044 lat (msec) : 2=2.19%, 50=1.34% 00:18:31.044 cpu : usr=1.26%, sys=2.32%, ctx=825, majf=0, minf=1 00:18:31.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:31.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.044 issued rwts: total=310,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:31.044 00:18:31.044 Run status group 0 (all jobs): 00:18:31.044 READ: bw=5404KiB/s (5534kB/s), 914KiB/s-1996KiB/s (936kB/s-2044kB/s), io=5620KiB (5755kB), run=1024-1040msec 00:18:31.044 WRITE: bw=9846KiB/s (10.1MB/s), 1979KiB/s-3938KiB/s (2026kB/s-4033kB/s), io=10.0MiB (10.5MB), run=1024-1040msec 00:18:31.044 00:18:31.044 Disk stats (read/write): 00:18:31.044 nvme0n1: ios=563/1024, merge=0/0, ticks=385/448, in_queue=833, util=88.98% 00:18:31.044 nvme0n2: ios=387/512, merge=0/0, ticks=1163/340, in_queue=1503, util=94.11% 00:18:31.044 nvme0n3: ios=286/512, merge=0/0, ticks=671/216, in_queue=887, util=97.18% 00:18:31.044 nvme0n4: ios=329/512, merge=0/0, ticks=1427/332, in_queue=1759, util=98.63% 00:18:31.044 17:54:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:31.044 [global] 00:18:31.044 thread=1 00:18:31.044 invalidate=1 00:18:31.044 rw=write 00:18:31.044 time_based=1 00:18:31.044 runtime=1 00:18:31.044 ioengine=libaio 00:18:31.044 direct=1 00:18:31.044 bs=4096 00:18:31.044 iodepth=128 00:18:31.044 norandommap=0 00:18:31.044 numjobs=1 00:18:31.044 00:18:31.044 verify_dump=1 00:18:31.044 verify_backlog=512 00:18:31.044 verify_state_save=0 00:18:31.044 do_verify=1 00:18:31.044 verify=crc32c-intel 00:18:31.044 [job0] 00:18:31.044 filename=/dev/nvme0n1 00:18:31.044 [job1] 00:18:31.044 filename=/dev/nvme0n2 00:18:31.044 [job2] 00:18:31.044 filename=/dev/nvme0n3 00:18:31.044 [job3] 00:18:31.044 filename=/dev/nvme0n4 00:18:31.044 Could not set queue depth (nvme0n1) 00:18:31.044 Could not set queue depth (nvme0n2) 00:18:31.044 Could not set queue depth (nvme0n3) 00:18:31.044 Could not set queue depth (nvme0n4) 00:18:31.044 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:31.044 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:31.044 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:31.044 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:31.044 fio-3.35 00:18:31.044 Starting 4 threads 00:18:32.414 00:18:32.414 job0: (groupid=0, jobs=1): err= 0: pid=949566: Sat Jul 20 17:54:06 2024 00:18:32.414 read: IOPS=1444, BW=5780KiB/s (5918kB/s)(5872KiB/1016msec) 00:18:32.414 slat (usec): min=3, max=68503, avg=285.67, stdev=2461.50 00:18:32.414 clat (msec): min=5, max=119, avg=34.75, stdev=26.07 00:18:32.414 lat (msec): min=10, max=119, avg=35.03, stdev=26.20 00:18:32.414 clat percentiles (msec): 00:18:32.414 | 1.00th=[ 15], 
5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 18], 00:18:32.414 | 30.00th=[ 20], 40.00th=[ 22], 50.00th=[ 25], 60.00th=[ 29], 00:18:32.414 | 70.00th=[ 34], 80.00th=[ 45], 90.00th=[ 75], 95.00th=[ 104], 00:18:32.414 | 99.00th=[ 112], 99.50th=[ 114], 99.90th=[ 120], 99.95th=[ 120], 00:18:32.414 | 99.99th=[ 120] 00:18:32.414 write: IOPS=1511, BW=6047KiB/s (6192kB/s)(6144KiB/1016msec); 0 zone resets 00:18:32.414 slat (usec): min=6, max=73505, avg=375.58, stdev=3115.32 00:18:32.414 clat (usec): min=1553, max=159706, avg=48275.55, stdev=44999.86 00:18:32.414 lat (msec): min=3, max=159, avg=48.65, stdev=45.22 00:18:32.414 clat percentiles (msec): 00:18:32.414 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 15], 00:18:32.414 | 30.00th=[ 18], 40.00th=[ 21], 50.00th=[ 26], 60.00th=[ 32], 00:18:32.414 | 70.00th=[ 65], 80.00th=[ 89], 90.00th=[ 132], 95.00th=[ 144], 00:18:32.414 | 99.00th=[ 159], 99.50th=[ 159], 99.90th=[ 161], 99.95th=[ 161], 00:18:32.414 | 99.99th=[ 161] 00:18:32.414 bw ( KiB/s): min= 3408, max= 8880, per=16.47%, avg=6144.00, stdev=3869.29, samples=2 00:18:32.414 iops : min= 852, max= 2220, avg=1536.00, stdev=967.32, samples=2 00:18:32.414 lat (msec) : 2=0.03%, 4=0.63%, 10=5.33%, 20=31.19%, 50=36.88% 00:18:32.414 lat (msec) : 100=14.51%, 250=11.42% 00:18:32.414 cpu : usr=1.97%, sys=2.27%, ctx=152, majf=0, minf=17 00:18:32.414 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:18:32.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:32.414 issued rwts: total=1468,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.414 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:32.414 job1: (groupid=0, jobs=1): err= 0: pid=949567: Sat Jul 20 17:54:06 2024 00:18:32.414 read: IOPS=2017, BW=8071KiB/s (8265kB/s)(8192KiB/1015msec) 00:18:32.414 slat (usec): min=2, max=78342, avg=216.09, stdev=2204.03 00:18:32.414 clat (msec): min=4, max=113, avg=26.74, stdev=25.06 00:18:32.414 lat (msec): min=4, max=113, avg=26.95, stdev=25.20 00:18:32.414 clat percentiles (msec): 00:18:32.414 | 1.00th=[ 6], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:18:32.414 | 30.00th=[ 13], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 21], 00:18:32.414 | 70.00th=[ 26], 80.00th=[ 31], 90.00th=[ 87], 95.00th=[ 93], 00:18:32.414 | 99.00th=[ 105], 99.50th=[ 108], 99.90th=[ 109], 99.95th=[ 109], 00:18:32.414 | 99.99th=[ 113] 00:18:32.414 write: IOPS=2460, BW=9840KiB/s (10.1MB/s)(9988KiB/1015msec); 0 zone resets 00:18:32.414 slat (usec): min=3, max=71930, avg=208.92, stdev=2238.76 00:18:32.414 clat (usec): min=1377, max=143487, avg=29500.42, stdev=32146.10 00:18:32.414 lat (usec): min=1398, max=143492, avg=29709.34, stdev=32301.84 00:18:32.414 clat percentiles (msec): 00:18:32.414 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 11], 00:18:32.414 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 17], 60.00th=[ 20], 00:18:32.414 | 70.00th=[ 23], 80.00th=[ 45], 90.00th=[ 74], 95.00th=[ 118], 00:18:32.414 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:18:32.414 | 99.99th=[ 144] 00:18:32.414 bw ( KiB/s): min= 6664, max=12288, per=25.39%, avg=9476.00, stdev=3976.77, samples=2 00:18:32.414 iops : min= 1666, max= 3072, avg=2369.00, stdev=994.19, samples=2 00:18:32.414 lat (msec) : 2=0.07%, 4=0.97%, 10=12.45%, 20=46.73%, 50=25.08% 00:18:32.414 lat (msec) : 100=9.90%, 250=4.80% 00:18:32.414 cpu : usr=1.68%, sys=2.66%, ctx=263, majf=0, minf=13 00:18:32.414 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:32.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:32.414 issued rwts: total=2048,2497,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.414 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:32.414 job2: (groupid=0, jobs=1): err= 0: pid=949568: Sat Jul 20 17:54:06 2024 00:18:32.414 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:18:32.414 slat (usec): min=2, max=49671, avg=200.93, stdev=1868.56 00:18:32.414 clat (msec): min=3, max=186, avg=25.72, stdev=25.84 00:18:32.414 lat (msec): min=3, max=186, avg=25.92, stdev=25.93 00:18:32.414 clat percentiles (msec): 00:18:32.414 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 12], 00:18:32.414 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 18], 00:18:32.415 | 70.00th=[ 31], 80.00th=[ 39], 90.00th=[ 48], 95.00th=[ 65], 00:18:32.415 | 99.00th=[ 182], 99.50th=[ 184], 99.90th=[ 188], 99.95th=[ 188], 00:18:32.415 | 99.99th=[ 188] 00:18:32.415 write: IOPS=2879, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1002msec); 0 zone resets 00:18:32.415 slat (usec): min=3, max=23928, avg=147.94, stdev=872.34 00:18:32.415 clat (usec): min=1413, max=98876, avg=21248.02, stdev=13141.07 00:18:32.415 lat (usec): min=1824, max=98880, avg=21395.96, stdev=13176.02 00:18:32.415 clat percentiles (usec): 00:18:32.415 | 1.00th=[ 5538], 5.00th=[ 8094], 10.00th=[10290], 20.00th=[12125], 00:18:32.415 | 30.00th=[14222], 40.00th=[15795], 50.00th=[17957], 60.00th=[19268], 00:18:32.415 | 70.00th=[20841], 80.00th=[30278], 90.00th=[40633], 95.00th=[43254], 00:18:32.415 | 99.00th=[58459], 99.50th=[99091], 99.90th=[99091], 99.95th=[99091], 00:18:32.415 | 99.99th=[99091] 00:18:32.415 bw ( KiB/s): min=11008, max=11056, per=29.56%, avg=11032.00, stdev=33.94, samples=2 00:18:32.415 iops : min= 2752, max= 2764, avg=2758.00, stdev= 8.49, samples=2 00:18:32.415 lat (msec) : 2=0.17%, 4=0.31%, 10=8.54%, 20=55.96%, 50=29.61% 00:18:32.415 lat (msec) : 100=4.28%, 250=1.14% 00:18:32.415 cpu : usr=1.50%, sys=3.10%, ctx=341, majf=0, minf=9 00:18:32.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:32.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:32.415 issued rwts: total=2560,2885,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.415 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:32.415 job3: (groupid=0, jobs=1): err= 0: pid=949569: Sat Jul 20 17:54:06 2024 00:18:32.415 read: IOPS=2078, BW=8315KiB/s (8515kB/s)(8448KiB/1016msec) 00:18:32.415 slat (usec): min=2, max=72977, avg=227.75, stdev=2641.21 00:18:32.415 clat (usec): min=1520, max=101288, avg=24190.92, stdev=22295.11 00:18:32.415 lat (usec): min=1856, max=101293, avg=24418.67, stdev=22428.57 00:18:32.415 clat percentiles (msec): 00:18:32.415 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 14], 00:18:32.415 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 20], 00:18:32.415 | 70.00th=[ 21], 80.00th=[ 29], 90.00th=[ 71], 95.00th=[ 87], 00:18:32.415 | 99.00th=[ 95], 99.50th=[ 95], 99.90th=[ 96], 99.95th=[ 96], 00:18:32.415 | 99.99th=[ 102] 00:18:32.415 write: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(10.0MiB/1016msec); 0 zone resets 00:18:32.415 slat (usec): min=3, max=36874, avg=191.54, stdev=1265.69 00:18:32.415 clat (usec): min=1137, max=122920, avg=30368.17, stdev=19354.98 
00:18:32.415 lat (usec): min=1148, max=122925, avg=30559.72, stdev=19428.33 00:18:32.415 clat percentiles (msec): 00:18:32.415 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 15], 20.00th=[ 18], 00:18:32.415 | 30.00th=[ 21], 40.00th=[ 25], 50.00th=[ 27], 60.00th=[ 30], 00:18:32.415 | 70.00th=[ 33], 80.00th=[ 37], 90.00th=[ 47], 95.00th=[ 87], 00:18:32.415 | 99.00th=[ 103], 99.50th=[ 103], 99.90th=[ 104], 99.95th=[ 122], 00:18:32.415 | 99.99th=[ 124] 00:18:32.415 bw ( KiB/s): min= 7680, max=12288, per=26.76%, avg=9984.00, stdev=3258.35, samples=2 00:18:32.415 iops : min= 1920, max= 3072, avg=2496.00, stdev=814.59, samples=2 00:18:32.415 lat (msec) : 2=0.49%, 4=1.90%, 10=5.48%, 20=37.26%, 50=44.37% 00:18:32.415 lat (msec) : 100=9.05%, 250=1.43% 00:18:32.415 cpu : usr=1.58%, sys=2.86%, ctx=366, majf=0, minf=11 00:18:32.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:32.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:32.415 issued rwts: total=2112,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.415 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:32.415 00:18:32.415 Run status group 0 (all jobs): 00:18:32.415 READ: bw=31.5MiB/s (33.0MB/s), 5780KiB/s-9.98MiB/s (5918kB/s-10.5MB/s), io=32.0MiB (33.5MB), run=1002-1016msec 00:18:32.415 WRITE: bw=36.4MiB/s (38.2MB/s), 6047KiB/s-11.2MiB/s (6192kB/s-11.8MB/s), io=37.0MiB (38.8MB), run=1002-1016msec 00:18:32.415 00:18:32.415 Disk stats (read/write): 00:18:32.415 nvme0n1: ios=1049/1527, merge=0/0, ticks=23727/72047, in_queue=95774, util=93.79% 00:18:32.415 nvme0n2: ios=1876/2094, merge=0/0, ticks=27643/31213, in_queue=58856, util=100.00% 00:18:32.415 nvme0n3: ios=2166/2560, merge=0/0, ticks=38878/33665, in_queue=72543, util=98.64% 00:18:32.415 nvme0n4: ios=2099/2198, merge=0/0, ticks=28183/33806, in_queue=61989, util=92.42% 00:18:32.415 17:54:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:32.415 [global] 00:18:32.415 thread=1 00:18:32.415 invalidate=1 00:18:32.415 rw=randwrite 00:18:32.415 time_based=1 00:18:32.415 runtime=1 00:18:32.415 ioengine=libaio 00:18:32.415 direct=1 00:18:32.415 bs=4096 00:18:32.415 iodepth=128 00:18:32.415 norandommap=0 00:18:32.415 numjobs=1 00:18:32.415 00:18:32.415 verify_dump=1 00:18:32.415 verify_backlog=512 00:18:32.415 verify_state_save=0 00:18:32.415 do_verify=1 00:18:32.415 verify=crc32c-intel 00:18:32.415 [job0] 00:18:32.415 filename=/dev/nvme0n1 00:18:32.415 [job1] 00:18:32.415 filename=/dev/nvme0n2 00:18:32.415 [job2] 00:18:32.415 filename=/dev/nvme0n3 00:18:32.415 [job3] 00:18:32.415 filename=/dev/nvme0n4 00:18:32.415 Could not set queue depth (nvme0n1) 00:18:32.415 Could not set queue depth (nvme0n2) 00:18:32.415 Could not set queue depth (nvme0n3) 00:18:32.415 Could not set queue depth (nvme0n4) 00:18:32.415 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:32.415 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:32.415 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:32.415 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:32.415 fio-3.35 00:18:32.415 Starting 4 
threads 00:18:33.793 00:18:33.793 job0: (groupid=0, jobs=1): err= 0: pid=949880: Sat Jul 20 17:54:08 2024 00:18:33.793 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:18:33.793 slat (usec): min=3, max=17100, avg=201.62, stdev=1178.65 00:18:33.793 clat (usec): min=9397, max=50890, avg=25109.75, stdev=11846.92 00:18:33.793 lat (usec): min=9990, max=50897, avg=25311.37, stdev=11900.09 00:18:33.793 clat percentiles (usec): 00:18:33.793 | 1.00th=[10814], 5.00th=[11994], 10.00th=[13304], 20.00th=[14353], 00:18:33.793 | 30.00th=[16909], 40.00th=[19006], 50.00th=[21365], 60.00th=[22414], 00:18:33.793 | 70.00th=[28443], 80.00th=[39584], 90.00th=[45351], 95.00th=[48497], 00:18:33.793 | 99.00th=[51119], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:18:33.793 | 99.99th=[51119] 00:18:33.793 write: IOPS=2369, BW=9477KiB/s (9704kB/s)(9524KiB/1005msec); 0 zone resets 00:18:33.793 slat (usec): min=5, max=9494, avg=239.77, stdev=838.38 00:18:33.793 clat (usec): min=690, max=54410, avg=31732.52, stdev=9398.43 00:18:33.793 lat (usec): min=8686, max=54425, avg=31972.29, stdev=9427.85 00:18:33.793 clat percentiles (usec): 00:18:33.793 | 1.00th=[ 8848], 5.00th=[18220], 10.00th=[20579], 20.00th=[23725], 00:18:33.793 | 30.00th=[26084], 40.00th=[28181], 50.00th=[30540], 60.00th=[33162], 00:18:33.793 | 70.00th=[36963], 80.00th=[40109], 90.00th=[44827], 95.00th=[47449], 00:18:33.793 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:18:33.793 | 99.99th=[54264] 00:18:33.793 bw ( KiB/s): min= 7256, max=10768, per=21.42%, avg=9012.00, stdev=2483.36, samples=2 00:18:33.793 iops : min= 1814, max= 2692, avg=2253.00, stdev=620.84, samples=2 00:18:33.793 lat (usec) : 750=0.02% 00:18:33.793 lat (msec) : 10=0.84%, 20=24.36%, 50=72.07%, 100=2.71% 00:18:33.793 cpu : usr=2.39%, sys=4.18%, ctx=423, majf=0, minf=1 00:18:33.793 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:33.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:33.793 issued rwts: total=2048,2381,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.793 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:33.793 job1: (groupid=0, jobs=1): err= 0: pid=949900: Sat Jul 20 17:54:08 2024 00:18:33.793 read: IOPS=2994, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1003msec) 00:18:33.793 slat (usec): min=3, max=9874, avg=137.96, stdev=822.17 00:18:33.793 clat (usec): min=1931, max=37210, avg=16669.24, stdev=4750.95 00:18:33.793 lat (usec): min=2285, max=37216, avg=16807.20, stdev=4819.41 00:18:33.793 clat percentiles (usec): 00:18:33.793 | 1.00th=[ 4178], 5.00th=[10814], 10.00th=[11600], 20.00th=[12518], 00:18:33.793 | 30.00th=[13960], 40.00th=[15139], 50.00th=[16057], 60.00th=[17433], 00:18:33.793 | 70.00th=[19006], 80.00th=[20055], 90.00th=[22414], 95.00th=[25297], 00:18:33.793 | 99.00th=[31065], 99.50th=[33424], 99.90th=[36963], 99.95th=[36963], 00:18:33.793 | 99.99th=[36963] 00:18:33.793 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:18:33.793 slat (usec): min=4, max=16026, avg=181.65, stdev=811.52 00:18:33.793 clat (usec): min=7558, max=56065, avg=24925.02, stdev=11942.07 00:18:33.793 lat (usec): min=7587, max=56075, avg=25106.68, stdev=12026.70 00:18:33.793 clat percentiles (usec): 00:18:33.793 | 1.00th=[11207], 5.00th=[13173], 10.00th=[14484], 20.00th=[16319], 00:18:33.793 | 30.00th=[16909], 40.00th=[18220], 50.00th=[19792], 60.00th=[21890], 00:18:33.793 | 
70.00th=[25035], 80.00th=[36963], 90.00th=[46400], 95.00th=[51119], 00:18:33.793 | 99.00th=[54264], 99.50th=[54789], 99.90th=[55837], 99.95th=[55837], 00:18:33.793 | 99.99th=[55837] 00:18:33.793 bw ( KiB/s): min=10824, max=13779, per=29.23%, avg=12301.50, stdev=2089.50, samples=2 00:18:33.793 iops : min= 2706, max= 3444, avg=3075.00, stdev=521.84, samples=2 00:18:33.793 lat (msec) : 2=0.02%, 4=0.44%, 10=1.33%, 20=63.80%, 50=30.83% 00:18:33.793 lat (msec) : 100=3.57% 00:18:33.793 cpu : usr=3.29%, sys=5.79%, ctx=367, majf=0, minf=1 00:18:33.793 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:33.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:33.793 issued rwts: total=3003,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.793 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:33.793 job2: (groupid=0, jobs=1): err= 0: pid=949922: Sat Jul 20 17:54:08 2024 00:18:33.793 read: IOPS=1707, BW=6830KiB/s (6994kB/s)(6864KiB/1005msec) 00:18:33.793 slat (usec): min=2, max=10955, avg=180.25, stdev=1003.66 00:18:33.793 clat (usec): min=1631, max=48976, avg=21696.69, stdev=7296.77 00:18:33.793 lat (usec): min=6391, max=50790, avg=21876.94, stdev=7357.70 00:18:33.793 clat percentiles (usec): 00:18:33.793 | 1.00th=[ 6521], 5.00th=[11469], 10.00th=[12256], 20.00th=[14877], 00:18:33.793 | 30.00th=[17957], 40.00th=[19006], 50.00th=[20579], 60.00th=[23462], 00:18:33.793 | 70.00th=[25822], 80.00th=[27132], 90.00th=[30802], 95.00th=[34866], 00:18:33.793 | 99.00th=[42206], 99.50th=[44303], 99.90th=[49021], 99.95th=[49021], 00:18:33.793 | 99.99th=[49021] 00:18:33.793 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:18:33.794 slat (usec): min=4, max=13545, avg=331.76, stdev=1088.27 00:18:33.794 clat (usec): min=12898, max=70121, avg=43400.05, stdev=10149.71 00:18:33.794 lat (usec): min=12903, max=70135, avg=43731.81, stdev=10201.65 00:18:33.794 clat percentiles (usec): 00:18:33.794 | 1.00th=[17695], 5.00th=[27132], 10.00th=[29754], 20.00th=[35914], 00:18:33.794 | 30.00th=[39060], 40.00th=[41681], 50.00th=[43779], 60.00th=[44827], 00:18:33.794 | 70.00th=[46924], 80.00th=[51643], 90.00th=[56361], 95.00th=[61604], 00:18:33.794 | 99.00th=[68682], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:18:33.794 | 99.99th=[69731] 00:18:33.794 bw ( KiB/s): min= 8192, max= 8192, per=19.47%, avg=8192.00, stdev= 0.00, samples=2 00:18:33.794 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:18:33.794 lat (msec) : 2=0.03%, 10=1.04%, 20=20.30%, 50=65.97%, 100=12.67% 00:18:33.794 cpu : usr=1.59%, sys=2.69%, ctx=324, majf=0, minf=1 00:18:33.794 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:33.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:33.794 issued rwts: total=1716,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:33.794 job3: (groupid=0, jobs=1): err= 0: pid=949923: Sat Jul 20 17:54:08 2024 00:18:33.794 read: IOPS=3008, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1003msec) 00:18:33.794 slat (usec): min=3, max=16656, avg=123.33, stdev=663.58 00:18:33.794 clat (usec): min=1480, max=61943, avg=15415.68, stdev=8037.80 00:18:33.794 lat (usec): min=3166, max=61950, avg=15539.01, stdev=8083.93 00:18:33.794 clat 
percentiles (usec): 00:18:33.794 | 1.00th=[ 6259], 5.00th=[ 9896], 10.00th=[10945], 20.00th=[11338], 00:18:33.794 | 30.00th=[11994], 40.00th=[12518], 50.00th=[13304], 60.00th=[13566], 00:18:33.794 | 70.00th=[14615], 80.00th=[16581], 90.00th=[20841], 95.00th=[30540], 00:18:33.794 | 99.00th=[50070], 99.50th=[57934], 99.90th=[62129], 99.95th=[62129], 00:18:33.794 | 99.99th=[62129] 00:18:33.794 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:18:33.794 slat (usec): min=4, max=18464, avg=195.87, stdev=693.21 00:18:33.794 clat (usec): min=8128, max=52160, avg=26116.75, stdev=11158.37 00:18:33.794 lat (usec): min=8140, max=52181, avg=26312.61, stdev=11234.78 00:18:33.794 clat percentiles (usec): 00:18:33.794 | 1.00th=[11338], 5.00th=[13435], 10.00th=[14615], 20.00th=[15795], 00:18:33.794 | 30.00th=[17171], 40.00th=[17957], 50.00th=[21627], 60.00th=[28967], 00:18:33.794 | 70.00th=[33817], 80.00th=[40109], 90.00th=[42730], 95.00th=[43779], 00:18:33.794 | 99.00th=[45876], 99.50th=[47449], 99.90th=[50070], 99.95th=[50070], 00:18:33.794 | 99.99th=[52167] 00:18:33.794 bw ( KiB/s): min= 8856, max=15720, per=29.20%, avg=12288.00, stdev=4853.58, samples=2 00:18:33.794 iops : min= 2214, max= 3930, avg=3072.00, stdev=1213.40, samples=2 00:18:33.794 lat (msec) : 2=0.02%, 4=0.30%, 10=2.58%, 20=64.63%, 50=31.77% 00:18:33.794 lat (msec) : 100=0.71% 00:18:33.794 cpu : usr=4.19%, sys=5.29%, ctx=520, majf=0, minf=1 00:18:33.794 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:33.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:33.794 issued rwts: total=3018,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:33.794 00:18:33.794 Run status group 0 (all jobs): 00:18:33.794 READ: bw=38.0MiB/s (39.9MB/s), 6830KiB/s-11.8MiB/s (6994kB/s-12.3MB/s), io=38.2MiB (40.1MB), run=1003-1005msec 00:18:33.794 WRITE: bw=41.1MiB/s (43.1MB/s), 8151KiB/s-12.0MiB/s (8347kB/s-12.5MB/s), io=41.3MiB (43.3MB), run=1003-1005msec 00:18:33.794 00:18:33.794 Disk stats (read/write): 00:18:33.794 nvme0n1: ios=1618/2048, merge=0/0, ticks=10411/16539, in_queue=26950, util=91.28% 00:18:33.794 nvme0n2: ios=2279/2560, merge=0/0, ticks=20469/32126, in_queue=52595, util=97.25% 00:18:33.794 nvme0n3: ios=1560/1684, merge=0/0, ticks=13165/23532, in_queue=36697, util=97.48% 00:18:33.794 nvme0n4: ios=2197/2560, merge=0/0, ticks=16019/30571, in_queue=46590, util=97.25% 00:18:33.794 17:54:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:33.794 17:54:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=950418 00:18:33.794 17:54:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:33.794 17:54:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:33.794 [global] 00:18:33.794 thread=1 00:18:33.794 invalidate=1 00:18:33.794 rw=read 00:18:33.794 time_based=1 00:18:33.794 runtime=10 00:18:33.794 ioengine=libaio 00:18:33.794 direct=1 00:18:33.794 bs=4096 00:18:33.794 iodepth=1 00:18:33.794 norandommap=1 00:18:33.794 numjobs=1 00:18:33.794 00:18:33.794 [job0] 00:18:33.794 filename=/dev/nvme0n1 00:18:33.794 [job1] 00:18:33.794 filename=/dev/nvme0n2 00:18:33.794 [job2] 00:18:33.794 filename=/dev/nvme0n3 00:18:33.794 [job3] 00:18:33.794 filename=/dev/nvme0n4 00:18:33.794 Could not set 
queue depth (nvme0n1) 00:18:33.794 Could not set queue depth (nvme0n2) 00:18:33.794 Could not set queue depth (nvme0n3) 00:18:33.794 Could not set queue depth (nvme0n4) 00:18:34.051 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:34.051 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:34.051 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:34.051 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:34.051 fio-3.35 00:18:34.051 Starting 4 threads 00:18:36.616 17:54:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:36.873 17:54:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:36.873 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=14483456, buflen=4096 00:18:36.873 fio: pid=950509, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:37.130 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=3792896, buflen=4096 00:18:37.130 fio: pid=950508, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:37.130 17:54:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:37.130 17:54:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:37.388 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=2912256, buflen=4096 00:18:37.388 fio: pid=950506, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:37.645 17:54:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:37.645 17:54:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:37.645 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=19271680, buflen=4096 00:18:37.645 fio: pid=950507, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:37.903 17:54:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:37.903 17:54:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:37.903 00:18:37.903 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=950506: Sat Jul 20 17:54:12 2024 00:18:37.903 read: IOPS=208, BW=833KiB/s (853kB/s)(2844KiB/3416msec) 00:18:37.903 slat (usec): min=6, max=14303, avg=51.48, stdev=731.31 00:18:37.903 clat (usec): min=586, max=42194, avg=4748.53, stdev=12124.81 00:18:37.903 lat (usec): min=599, max=42208, avg=4779.97, stdev=12131.99 00:18:37.903 clat percentiles (usec): 00:18:37.903 | 1.00th=[ 709], 5.00th=[ 717], 10.00th=[ 717], 20.00th=[ 725], 00:18:37.903 | 30.00th=[ 725], 40.00th=[ 734], 50.00th=[ 734], 60.00th=[ 742], 00:18:37.903 | 70.00th=[ 750], 80.00th=[ 758], 90.00th=[ 1254], 95.00th=[41157], 00:18:37.903 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:37.903 | 
99.99th=[42206] 00:18:37.903 bw ( KiB/s): min= 96, max= 920, per=2.22%, avg=238.67, stdev=333.88, samples=6 00:18:37.903 iops : min= 24, max= 230, avg=59.67, stdev=83.47, samples=6 00:18:37.903 lat (usec) : 750=75.84%, 1000=13.76% 00:18:37.903 lat (msec) : 2=0.28%, 10=0.14%, 50=9.83% 00:18:37.903 cpu : usr=0.15%, sys=0.41%, ctx=716, majf=0, minf=1 00:18:37.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:37.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.903 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.903 issued rwts: total=712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:37.903 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=950507: Sat Jul 20 17:54:12 2024 00:18:37.903 read: IOPS=1276, BW=5104KiB/s (5227kB/s)(18.4MiB/3687msec) 00:18:37.903 slat (usec): min=5, max=20345, avg=32.77, stdev=514.43 00:18:37.903 clat (usec): min=528, max=41632, avg=746.95, stdev=1367.77 00:18:37.903 lat (usec): min=536, max=41640, avg=779.73, stdev=1461.31 00:18:37.903 clat percentiles (usec): 00:18:37.903 | 1.00th=[ 570], 5.00th=[ 594], 10.00th=[ 603], 20.00th=[ 619], 00:18:37.903 | 30.00th=[ 635], 40.00th=[ 652], 50.00th=[ 668], 60.00th=[ 685], 00:18:37.903 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 865], 95.00th=[ 1004], 00:18:37.903 | 99.00th=[ 1020], 99.50th=[ 1139], 99.90th=[40633], 99.95th=[41157], 00:18:37.903 | 99.99th=[41681] 00:18:37.903 bw ( KiB/s): min= 3360, max= 5736, per=47.29%, avg=5068.14, stdev=852.27, samples=7 00:18:37.903 iops : min= 840, max= 1434, avg=1267.00, stdev=213.05, samples=7 00:18:37.903 lat (usec) : 750=81.87%, 1000=13.03% 00:18:37.903 lat (msec) : 2=4.93%, 4=0.02%, 50=0.13% 00:18:37.903 cpu : usr=1.03%, sys=2.58%, ctx=4716, majf=0, minf=1 00:18:37.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:37.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.903 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.903 issued rwts: total=4706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:37.903 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=950508: Sat Jul 20 17:54:12 2024 00:18:37.903 read: IOPS=293, BW=1174KiB/s (1202kB/s)(3704KiB/3155msec) 00:18:37.903 slat (usec): min=5, max=16357, avg=42.36, stdev=671.45 00:18:37.903 clat (usec): min=438, max=41620, avg=3361.26, stdev=9914.19 00:18:37.903 lat (usec): min=451, max=41634, avg=3403.65, stdev=9931.30 00:18:37.903 clat percentiles (usec): 00:18:37.903 | 1.00th=[ 457], 5.00th=[ 478], 10.00th=[ 494], 20.00th=[ 529], 00:18:37.903 | 30.00th=[ 553], 40.00th=[ 611], 50.00th=[ 693], 60.00th=[ 996], 00:18:37.903 | 70.00th=[ 996], 80.00th=[ 1004], 90.00th=[ 1012], 95.00th=[41157], 00:18:37.903 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:18:37.903 | 99.99th=[41681] 00:18:37.903 bw ( KiB/s): min= 96, max= 2880, per=8.00%, avg=857.33, stdev=1081.68, samples=6 00:18:37.903 iops : min= 24, max= 720, avg=214.33, stdev=270.42, samples=6 00:18:37.903 lat (usec) : 500=12.08%, 750=39.70%, 1000=21.04% 00:18:37.903 lat (msec) : 2=20.60%, 50=6.47% 00:18:37.903 cpu : usr=0.19%, sys=0.41%, ctx=929, majf=0, minf=1 00:18:37.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:18:37.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.903 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.903 issued rwts: total=927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:37.903 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=950509: Sat Jul 20 17:54:12 2024 00:18:37.903 read: IOPS=1220, BW=4881KiB/s (4998kB/s)(13.8MiB/2898msec) 00:18:37.903 slat (nsec): min=5840, max=73519, avg=15611.50, stdev=9541.21 00:18:37.903 clat (usec): min=438, max=42183, avg=800.06, stdev=2497.13 00:18:37.903 lat (usec): min=446, max=42199, avg=815.68, stdev=2497.14 00:18:37.903 clat percentiles (usec): 00:18:37.903 | 1.00th=[ 449], 5.00th=[ 465], 10.00th=[ 482], 20.00th=[ 515], 00:18:37.903 | 30.00th=[ 545], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:18:37.903 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 996], 95.00th=[ 1004], 00:18:37.903 | 99.00th=[ 1020], 99.50th=[ 1385], 99.90th=[42206], 99.95th=[42206], 00:18:37.903 | 99.99th=[42206] 00:18:37.903 bw ( KiB/s): min= 4296, max= 7192, per=52.64%, avg=5641.60, stdev=1299.28, samples=5 00:18:37.903 iops : min= 1074, max= 1798, avg=1410.40, stdev=324.82, samples=5 00:18:37.903 lat (usec) : 500=15.89%, 750=66.30%, 1000=11.51% 00:18:37.903 lat (msec) : 2=5.85%, 4=0.03%, 10=0.03%, 50=0.37% 00:18:37.903 cpu : usr=1.42%, sys=2.69%, ctx=3539, majf=0, minf=1 00:18:37.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:37.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.903 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.903 issued rwts: total=3537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:37.903 00:18:37.903 Run status group 0 (all jobs): 00:18:37.903 READ: bw=10.5MiB/s (11.0MB/s), 833KiB/s-5104KiB/s (853kB/s-5227kB/s), io=38.6MiB (40.5MB), run=2898-3687msec 00:18:37.903 00:18:37.903 Disk stats (read/write): 00:18:37.903 nvme0n1: ios=693/0, merge=0/0, ticks=4428/0, in_queue=4428, util=99.51% 00:18:37.903 nvme0n2: ios=4576/0, merge=0/0, ticks=3369/0, in_queue=3369, util=94.45% 00:18:37.903 nvme0n3: ios=834/0, merge=0/0, ticks=3060/0, in_queue=3060, util=95.92% 00:18:37.903 nvme0n4: ios=3584/0, merge=0/0, ticks=2885/0, in_queue=2885, util=100.00% 00:18:38.161 17:54:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:38.161 17:54:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:38.417 17:54:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:38.417 17:54:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:38.674 17:54:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:38.674 17:54:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:38.930 17:54:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:18:38.930 17:54:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:39.187 17:54:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:39.187 17:54:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 950418 00:18:39.187 17:54:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:39.187 17:54:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:39.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:39.187 17:54:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:39.187 17:54:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:18:39.187 17:54:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:39.187 17:54:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:39.187 17:54:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:39.187 17:54:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:39.187 17:54:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:18:39.187 17:54:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:39.187 17:54:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:39.187 nvmf hotplug test: fio failed as expected 00:18:39.187 17:54:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:39.444 rmmod nvme_tcp 00:18:39.444 rmmod nvme_fabrics 00:18:39.444 rmmod nvme_keyring 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 947921 ']' 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 947921 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 947921 ']' 00:18:39.444 17:54:14 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 947921 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 947921 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 947921' 00:18:39.444 killing process with pid 947921 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 947921 00:18:39.444 17:54:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 947921 00:18:39.702 17:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:39.702 17:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:39.702 17:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:39.702 17:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:39.702 17:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:39.702 17:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.702 17:54:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.702 17:54:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.228 17:54:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:42.228 00:18:42.228 real 0m23.024s 00:18:42.228 user 1m18.140s 00:18:42.228 sys 0m6.572s 00:18:42.228 17:54:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:42.228 17:54:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.228 ************************************ 00:18:42.228 END TEST nvmf_fio_target 00:18:42.228 ************************************ 00:18:42.228 17:54:16 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:42.228 17:54:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:42.228 17:54:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:42.228 17:54:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:42.228 ************************************ 00:18:42.228 START TEST nvmf_bdevio 00:18:42.228 ************************************ 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:42.228 * Looking for test storage... 
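Note: the sequence above is the hotplug half of the fio target test: a 10-second read workload is started with fio-wrapper against the four exported namespaces, and the backing raid/malloc bdevs are then deleted over RPC while it is still running, so every fio job is expected to end with err=121 (Remote I/O error) and the script reports "nvmf hotplug test: fio failed as expected". A minimal sketch of that pattern, using the same script names and bdev names that appear in this run (paths abbreviated relative to the spdk checkout):

    # start the read workload in the background against the exported namespaces
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    # pull the backing bdevs out from under it while it is still running
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        scripts/rpc.py bdev_malloc_delete "$m"
    done
    # fio is expected to fail once its files disappear
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'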
00:18:42.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.228 17:54:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:42.229 17:54:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.229 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:42.229 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:42.229 17:54:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:42.229 17:54:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:44.128 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:44.128 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.128 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:44.129 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:44.129 
Found net devices under 0000:0a:00.1: cvl_0_1 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:44.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:44.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:18:44.129 00:18:44.129 --- 10.0.0.2 ping statistics --- 00:18:44.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.129 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:44.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:44.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:18:44.129 00:18:44.129 --- 10.0.0.1 ping statistics --- 00:18:44.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.129 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=953266 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 953266 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 953266 ']' 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:44.129 17:54:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:44.129 [2024-07-20 17:54:18.874304] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:44.129 [2024-07-20 17:54:18.874400] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.129 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.387 [2024-07-20 17:54:18.945337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:44.387 [2024-07-20 17:54:19.037664] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.387 [2024-07-20 17:54:19.037726] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
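Note: nvmfappstart here launches the target application inside the network namespace set up just above (cvl_0_0 was moved into cvl_0_0_ns_spdk and given 10.0.0.2/24, while the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1/24), then blocks until the target's RPC socket answers. Stripped of the xtrace noise, the start-up amounts to roughly:

    # run the NVMe-oF target on cores 3-6 (mask 0x78) inside the target namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # common.sh helper: poll until rpc.py can reach /var/tmp/spdk.sock for this pid
    waitforlisten "$nvmfpid"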
00:18:44.387 [2024-07-20 17:54:19.037750] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.387 [2024-07-20 17:54:19.037764] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.387 [2024-07-20 17:54:19.037775] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.387 [2024-07-20 17:54:19.037878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:44.387 [2024-07-20 17:54:19.037937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:44.387 [2024-07-20 17:54:19.037991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:44.387 [2024-07-20 17:54:19.037994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.387 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:44.387 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:18:44.387 17:54:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:44.387 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:44.387 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:44.645 17:54:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.645 17:54:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:44.645 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.645 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:44.645 [2024-07-20 17:54:19.189652] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.645 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.645 17:54:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:44.645 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.645 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:44.645 Malloc0 00:18:44.645 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.645 17:54:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:44.645 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.645 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:44.645 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.645 17:54:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:44.645 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.645 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:44.646 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.646 17:54:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:44.646 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.646 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
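Note: those five rpc_cmd calls are the whole target-side configuration for the bdevio run: one TCP transport, one 64 MiB / 512-byte-block malloc bdev, and one subsystem that exports it on 10.0.0.2:4420. Written out as plain rpc.py invocations with the same arguments as above:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420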
00:18:44.646 [2024-07-20 17:54:19.241667] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.646 17:54:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.646 17:54:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:44.646 17:54:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:44.646 17:54:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:44.646 17:54:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:44.646 17:54:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:44.646 17:54:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:44.646 { 00:18:44.646 "params": { 00:18:44.646 "name": "Nvme$subsystem", 00:18:44.646 "trtype": "$TEST_TRANSPORT", 00:18:44.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.646 "adrfam": "ipv4", 00:18:44.646 "trsvcid": "$NVMF_PORT", 00:18:44.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.646 "hdgst": ${hdgst:-false}, 00:18:44.646 "ddgst": ${ddgst:-false} 00:18:44.646 }, 00:18:44.646 "method": "bdev_nvme_attach_controller" 00:18:44.646 } 00:18:44.646 EOF 00:18:44.646 )") 00:18:44.646 17:54:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:44.646 17:54:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:44.646 17:54:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:44.646 17:54:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:44.646 "params": { 00:18:44.646 "name": "Nvme1", 00:18:44.646 "trtype": "tcp", 00:18:44.646 "traddr": "10.0.0.2", 00:18:44.646 "adrfam": "ipv4", 00:18:44.646 "trsvcid": "4420", 00:18:44.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.646 "hdgst": false, 00:18:44.646 "ddgst": false 00:18:44.646 }, 00:18:44.646 "method": "bdev_nvme_attach_controller" 00:18:44.646 }' 00:18:44.646 [2024-07-20 17:54:19.286595] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
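Note: the --json /dev/fd/62 argument is how the generated configuration reaches bdevio without a temporary file: gen_nvmf_target_json prints a config wrapping the bdev_nvme_attach_controller parameters shown above, and the shell exposes that stream as a /dev/fd path, so bdevio attaches to 10.0.0.2:4420 as an NVMe/TCP initiator and gets the Nvme1n1 bdev to run its block-level tests against. The same idea in isolation (bash process substitution; the exact fd number is whatever the shell picks):

    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)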
00:18:44.646 [2024-07-20 17:54:19.286686] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953300 ] 00:18:44.646 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.646 [2024-07-20 17:54:19.347916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:44.646 [2024-07-20 17:54:19.440430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.646 [2024-07-20 17:54:19.440481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.646 [2024-07-20 17:54:19.440484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.903 I/O targets: 00:18:44.903 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:44.904 00:18:44.904 00:18:44.904 CUnit - A unit testing framework for C - Version 2.1-3 00:18:44.904 http://cunit.sourceforge.net/ 00:18:44.904 00:18:44.904 00:18:44.904 Suite: bdevio tests on: Nvme1n1 00:18:45.161 Test: blockdev write read block ...passed 00:18:45.161 Test: blockdev write zeroes read block ...passed 00:18:45.161 Test: blockdev write zeroes read no split ...passed 00:18:45.161 Test: blockdev write zeroes read split ...passed 00:18:45.161 Test: blockdev write zeroes read split partial ...passed 00:18:45.161 Test: blockdev reset ...[2024-07-20 17:54:19.889741] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:45.161 [2024-07-20 17:54:19.889856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174ca00 (9): Bad file descriptor 00:18:45.161 [2024-07-20 17:54:19.944054] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:45.161 passed 00:18:45.419 Test: blockdev write read 8 blocks ...passed 00:18:45.419 Test: blockdev write read size > 128k ...passed 00:18:45.419 Test: blockdev write read invalid size ...passed 00:18:45.419 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:45.419 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:45.419 Test: blockdev write read max offset ...passed 00:18:45.419 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:45.419 Test: blockdev writev readv 8 blocks ...passed 00:18:45.419 Test: blockdev writev readv 30 x 1block ...passed 00:18:45.419 Test: blockdev writev readv block ...passed 00:18:45.419 Test: blockdev writev readv size > 128k ...passed 00:18:45.419 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:45.419 Test: blockdev comparev and writev ...[2024-07-20 17:54:20.207489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.419 [2024-07-20 17:54:20.207544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:45.419 [2024-07-20 17:54:20.207568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.419 [2024-07-20 17:54:20.207585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:45.419 [2024-07-20 17:54:20.208111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.419 [2024-07-20 17:54:20.208148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:45.419 [2024-07-20 17:54:20.208170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.419 [2024-07-20 17:54:20.208186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:45.419 [2024-07-20 17:54:20.208672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.419 [2024-07-20 17:54:20.208696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:45.419 [2024-07-20 17:54:20.208747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.419 [2024-07-20 17:54:20.208766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:45.419 [2024-07-20 17:54:20.209237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.419 [2024-07-20 17:54:20.209261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:45.419 [2024-07-20 17:54:20.209283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.419 [2024-07-20 17:54:20.209299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:45.677 passed 00:18:45.677 Test: blockdev nvme passthru rw ...passed 00:18:45.677 Test: blockdev nvme passthru vendor specific ...[2024-07-20 17:54:20.292311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:45.677 [2024-07-20 17:54:20.292338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:45.677 [2024-07-20 17:54:20.292632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:45.677 [2024-07-20 17:54:20.292655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:45.677 [2024-07-20 17:54:20.292950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:45.677 [2024-07-20 17:54:20.292973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:45.677 [2024-07-20 17:54:20.293235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:45.677 [2024-07-20 17:54:20.293257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:45.677 passed 00:18:45.677 Test: blockdev nvme admin passthru ...passed 00:18:45.677 Test: blockdev copy ...passed 00:18:45.677 00:18:45.677 Run Summary: Type Total Ran Passed Failed Inactive 00:18:45.677 suites 1 1 n/a 0 0 00:18:45.677 tests 23 23 23 0 0 00:18:45.677 asserts 152 152 152 0 n/a 00:18:45.677 00:18:45.677 Elapsed time = 1.391 seconds 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:45.934 rmmod nvme_tcp 00:18:45.934 rmmod nvme_fabrics 00:18:45.934 rmmod nvme_keyring 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 953266 ']' 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 953266 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
953266 ']' 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 953266 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 953266 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 953266' 00:18:45.934 killing process with pid 953266 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 953266 00:18:45.934 17:54:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 953266 00:18:46.192 17:54:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:46.192 17:54:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:46.192 17:54:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:46.192 17:54:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:46.192 17:54:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:46.192 17:54:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.192 17:54:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.192 17:54:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.722 17:54:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:48.722 00:18:48.722 real 0m6.369s 00:18:48.722 user 0m10.354s 00:18:48.722 sys 0m2.125s 00:18:48.722 17:54:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:48.722 17:54:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:48.722 ************************************ 00:18:48.722 END TEST nvmf_bdevio 00:18:48.722 ************************************ 00:18:48.722 17:54:22 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:48.722 17:54:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:48.722 17:54:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:48.722 17:54:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:48.722 ************************************ 00:18:48.722 START TEST nvmf_auth_target 00:18:48.722 ************************************ 00:18:48.722 17:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:48.722 * Looking for test storage... 
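Note: nvmftestfini above is the mirror image of the setup: the kernel initiator modules are unloaded, the bdevio target process (pid 953266) is killed, and the test network state is torn down before the next suite (nvmf_auth_target) begins. Condensed, and assuming remove_spdk_ns simply deletes the namespace created earlier:

    modprobe -v -r nvme-tcp            # rmmod output above: nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid" # killprocess 953266
    ip netns delete cvl_0_0_ns_spdk    # assumption: what remove_spdk_ns amounts to here
    ip -4 addr flush cvl_0_1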
00:18:48.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:48.722 17:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.722 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:48.722 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.722 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.722 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.722 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:48.723 17:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.625 17:54:25 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:50.625 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:50.625 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:18:50.625 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:50.625 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:50.625 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:50.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:50.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:18:50.626 00:18:50.626 --- 10.0.0.2 ping statistics --- 00:18:50.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.626 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:50.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:50.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:18:50.626 00:18:50.626 --- 10.0.0.1 ping statistics --- 00:18:50.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.626 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=955383 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 955383 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 955383 ']' 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
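At this point nvmftestinit has finished building the test network: the two ice (E810) ports are exposed as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given the target address 10.0.0.2/24, cvl_0_1 stays in the default namespace as the initiator side with 10.0.0.1/24, TCP port 4420 is opened, the path is ping-checked in both directions, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace with -L nvmf_auth (pid 955383). A minimal sketch of the same topology on a generic machine, assuming a veth pair stands in for the physical ports and that the interface, namespace, and binary paths below are illustrative rather than taken from this run:

# target-side interface lives in its own namespace; initiator side stays in the default namespace
ip netns add nvmf_tgt_ns
ip link add veth_host type veth peer name veth_tgt
ip link set veth_tgt netns nvmf_tgt_ns
ip addr add 10.0.0.1/24 dev veth_host
ip link set veth_host up
ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip netns exec nvmf_tgt_ns ip link set veth_tgt up
ip netns exec nvmf_tgt_ns ip link set lo up
# mirror the trace's firewall opening for NVMe/TCP, then sanity-check the path both ways
iptables -I INPUT 1 -i veth_host -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1
# host-side initiator driver, then the SPDK target inside the namespace with the nvmf_auth log flag
modprobe nvme-tcp
ip netns exec nvmf_tgt_ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &

Running the target inside its own namespace lets a single machine act as both NVMe/TCP initiator and target over a real network path, which is why the trace below can connect to 10.0.0.2 from the default namespace.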
00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:50.626 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.883 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:50.883 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:50.883 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=955503 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4c7e45eaba2f8974d98a5f0f7ba1d7fb02b69bab7e19f413 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.C2Q 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4c7e45eaba2f8974d98a5f0f7ba1d7fb02b69bab7e19f413 0 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4c7e45eaba2f8974d98a5f0f7ba1d7fb02b69bab7e19f413 0 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4c7e45eaba2f8974d98a5f0f7ba1d7fb02b69bab7e19f413 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.C2Q 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.C2Q 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.C2Q 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=656009c4072c06afed131d5b4a1824ef9402992dad17cc1ccd7d347e711b0831 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.3mc 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 656009c4072c06afed131d5b4a1824ef9402992dad17cc1ccd7d347e711b0831 3 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 656009c4072c06afed131d5b4a1824ef9402992dad17cc1ccd7d347e711b0831 3 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=656009c4072c06afed131d5b4a1824ef9402992dad17cc1ccd7d347e711b0831 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:50.884 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.3mc 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.3mc 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.3mc 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8e48afe95d15660a73a51b2d1eaddf57 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.98j 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8e48afe95d15660a73a51b2d1eaddf57 1 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8e48afe95d15660a73a51b2d1eaddf57 1 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=8e48afe95d15660a73a51b2d1eaddf57 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.98j 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.98j 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.98j 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:51.142 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7f54ae452781745fb645259978dbd5c674a0ff2897bf2273 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7cS 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7f54ae452781745fb645259978dbd5c674a0ff2897bf2273 2 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7f54ae452781745fb645259978dbd5c674a0ff2897bf2273 2 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7f54ae452781745fb645259978dbd5c674a0ff2897bf2273 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7cS 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7cS 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.7cS 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cfae38259df1773f957fe6a1ffadbe95218f9347fcbf6306 00:18:51.143 
17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.H1o 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cfae38259df1773f957fe6a1ffadbe95218f9347fcbf6306 2 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cfae38259df1773f957fe6a1ffadbe95218f9347fcbf6306 2 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cfae38259df1773f957fe6a1ffadbe95218f9347fcbf6306 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.H1o 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.H1o 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.H1o 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c9f49757ab74e5d8993b53ee6b68f059 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.gGr 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c9f49757ab74e5d8993b53ee6b68f059 1 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c9f49757ab74e5d8993b53ee6b68f059 1 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c9f49757ab74e5d8993b53ee6b68f059 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.gGr 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.gGr 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.gGr 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2e2c061017d7609c81e612ee1592309e4fea47cab1234f430f157c34f0056fcc 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Dvs 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2e2c061017d7609c81e612ee1592309e4fea47cab1234f430f157c34f0056fcc 3 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2e2c061017d7609c81e612ee1592309e4fea47cab1234f430f157c34f0056fcc 3 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2e2c061017d7609c81e612ee1592309e4fea47cab1234f430f157c34f0056fcc 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:51.143 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:51.401 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Dvs 00:18:51.401 17:54:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Dvs 00:18:51.401 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Dvs 00:18:51.401 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:51.401 17:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 955383 00:18:51.401 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 955383 ']' 00:18:51.401 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.401 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:51.401 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
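The run of gen_dhchap_key calls above produces the four host secrets keys[0..3] (null/48, sha256/32, sha384/48, sha512/64) and their controller counterparts ckeys[0..2]: each call draws len/2 random bytes from /dev/urandom with xxd -p, writes the result to a mktemp'd /tmp/spdk.key-<digest>.XXX file, wraps the hex string into the in-band-authentication representation DHHC-1:<digest-id>:<base64...>: through the inline helper that appears only as "python -" in the trace, and chmod 0600s the file. A condensed, hedged sketch of how one such key is produced and then wired up the way the remainder of this trace does it; key names, paths, and $hostnqn are illustrative, and the exact base64/checksum packing is left to that helper (or to nvme-cli's nvme gen-dhchap-key):

# 24 random bytes -> 48 hex characters, stored 0600, as gen_dhchap_key null 48 does
key_hex=$(xxd -p -c0 -l 24 /dev/urandom)
keyfile=$(mktemp -t spdk.key-null.XXX)
chmod 0600 "$keyfile"
# the DHHC-1 formatting of $key_hex into "$keyfile" happens in the helper not reproduced here

# register the key on the NVMe-oF target (default /var/tmp/spdk.sock) and on the host-side
# spdk_tgt initiator (-s /var/tmp/host.sock), mirroring rpc_cmd / hostrpc in the trace below
scripts/rpc.py keyring_file_add_key key0 "$keyfile"
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 "$keyfile"

# pin the initiator to one digest/dhgroup pair, allow the host on the subsystem with DH-HMAC-CHAP,
# then attach; ckey0 is assumed to have been generated and registered the same way
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

Because auth.sh then iterates for digest, for dhgroup, and for keyid, every digest and DH group combination is exercised against the same subsystem, and each pass ends with an nvme connect using the matching DHHC-1 secrets followed by a disconnect and nvmf_subsystem_remove_host, as the trace below shows.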
00:18:51.401 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:51.401 17:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.658 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:51.658 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:51.658 17:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 955503 /var/tmp/host.sock 00:18:51.658 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 955503 ']' 00:18:51.658 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:18:51.658 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:51.658 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:51.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:51.658 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:51.658 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.658 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:51.658 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:51.658 17:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:51.658 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.658 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.916 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.916 17:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:51.916 17:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.C2Q 00:18:51.916 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.916 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.916 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.916 17:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.C2Q 00:18:51.916 17:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.C2Q 00:18:52.173 17:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.3mc ]] 00:18:52.173 17:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3mc 00:18:52.173 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.173 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.173 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.173 17:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3mc 00:18:52.173 17:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3mc 00:18:52.173 17:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:52.173 17:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.98j 00:18:52.173 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.173 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.430 17:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.430 17:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.98j 00:18:52.430 17:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.98j 00:18:52.687 17:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.7cS ]] 00:18:52.687 17:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7cS 00:18:52.687 17:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.687 17:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.687 17:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.687 17:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7cS 00:18:52.687 17:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7cS 00:18:52.944 17:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:52.944 17:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.H1o 00:18:52.944 17:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.944 17:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.944 17:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.944 17:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.H1o 00:18:52.944 17:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.H1o 00:18:53.201 17:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.gGr ]] 00:18:53.201 17:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gGr 00:18:53.201 17:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.201 17:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.201 17:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.201 17:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gGr 00:18:53.201 17:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.gGr 00:18:53.458 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:53.458 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Dvs 00:18:53.458 17:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.458 17:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.458 17:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.458 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Dvs 00:18:53.458 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Dvs 00:18:53.714 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:53.714 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:53.714 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.714 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.714 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:53.714 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:53.972 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:53.972 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.972 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.972 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:53.972 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:53.972 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.972 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.972 17:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.972 17:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.972 17:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.972 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.972 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.229 00:18:54.229 17:54:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.229 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.229 17:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.485 17:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.485 17:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.485 17:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.485 17:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.485 17:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.485 17:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.485 { 00:18:54.485 "cntlid": 1, 00:18:54.485 "qid": 0, 00:18:54.485 "state": "enabled", 00:18:54.485 "listen_address": { 00:18:54.485 "trtype": "TCP", 00:18:54.485 "adrfam": "IPv4", 00:18:54.485 "traddr": "10.0.0.2", 00:18:54.485 "trsvcid": "4420" 00:18:54.486 }, 00:18:54.486 "peer_address": { 00:18:54.486 "trtype": "TCP", 00:18:54.486 "adrfam": "IPv4", 00:18:54.486 "traddr": "10.0.0.1", 00:18:54.486 "trsvcid": "45644" 00:18:54.486 }, 00:18:54.486 "auth": { 00:18:54.486 "state": "completed", 00:18:54.486 "digest": "sha256", 00:18:54.486 "dhgroup": "null" 00:18:54.486 } 00:18:54.486 } 00:18:54.486 ]' 00:18:54.486 17:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.486 17:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.486 17:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.486 17:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:54.486 17:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.742 17:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.742 17:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.742 17:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.999 17:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:18:55.928 17:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.928 17:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.928 17:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.928 17:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:55.928 17:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.928 17:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.928 17:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:55.928 17:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:56.185 17:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:56.185 17:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.185 17:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:56.185 17:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:56.185 17:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:56.185 17:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.185 17:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.185 17:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.185 17:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.185 17:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.185 17:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.185 17:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.442 00:18:56.442 17:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.442 17:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.442 17:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.699 17:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.699 17:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.699 17:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.699 17:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.699 17:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.699 17:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.699 { 00:18:56.699 "cntlid": 3, 00:18:56.699 "qid": 0, 00:18:56.699 "state": "enabled", 00:18:56.699 "listen_address": { 00:18:56.699 
"trtype": "TCP", 00:18:56.699 "adrfam": "IPv4", 00:18:56.699 "traddr": "10.0.0.2", 00:18:56.699 "trsvcid": "4420" 00:18:56.699 }, 00:18:56.699 "peer_address": { 00:18:56.699 "trtype": "TCP", 00:18:56.699 "adrfam": "IPv4", 00:18:56.699 "traddr": "10.0.0.1", 00:18:56.699 "trsvcid": "45664" 00:18:56.699 }, 00:18:56.699 "auth": { 00:18:56.699 "state": "completed", 00:18:56.699 "digest": "sha256", 00:18:56.699 "dhgroup": "null" 00:18:56.699 } 00:18:56.699 } 00:18:56.699 ]' 00:18:56.699 17:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.699 17:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.699 17:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.699 17:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:56.699 17:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.700 17:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.700 17:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.700 17:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.957 17:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.325 17:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.325 17:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.582 00:18:58.582 17:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.582 17:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.582 17:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.880 17:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.880 17:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.880 17:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.880 17:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.880 17:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.880 17:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.880 { 00:18:58.880 "cntlid": 5, 00:18:58.880 "qid": 0, 00:18:58.880 "state": "enabled", 00:18:58.880 "listen_address": { 00:18:58.880 "trtype": "TCP", 00:18:58.880 "adrfam": "IPv4", 00:18:58.880 "traddr": "10.0.0.2", 00:18:58.880 "trsvcid": "4420" 00:18:58.880 }, 00:18:58.880 "peer_address": { 00:18:58.880 "trtype": "TCP", 00:18:58.880 "adrfam": "IPv4", 00:18:58.880 "traddr": "10.0.0.1", 00:18:58.880 "trsvcid": "45694" 00:18:58.880 }, 00:18:58.880 "auth": { 00:18:58.880 "state": "completed", 00:18:58.880 "digest": "sha256", 00:18:58.880 "dhgroup": "null" 00:18:58.880 } 00:18:58.880 } 00:18:58.880 ]' 00:18:58.880 17:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.880 17:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.880 17:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.175 17:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:59.175 17:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.175 17:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.175 17:54:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.175 17:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.175 17:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:19:00.107 17:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.107 17:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.107 17:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.107 17:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.107 17:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.107 17:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.107 17:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:00.107 17:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:00.365 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:00.365 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.365 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:00.365 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:00.365 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:00.365 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.365 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:00.365 17:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.365 17:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.365 17:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.365 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.365 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.931 00:19:00.931 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.931 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.931 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.931 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.931 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.931 17:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.931 17:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.931 17:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.931 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.931 { 00:19:00.931 "cntlid": 7, 00:19:00.931 "qid": 0, 00:19:00.931 "state": "enabled", 00:19:00.931 "listen_address": { 00:19:00.931 "trtype": "TCP", 00:19:00.931 "adrfam": "IPv4", 00:19:00.931 "traddr": "10.0.0.2", 00:19:00.931 "trsvcid": "4420" 00:19:00.931 }, 00:19:00.931 "peer_address": { 00:19:00.931 "trtype": "TCP", 00:19:00.931 "adrfam": "IPv4", 00:19:00.931 "traddr": "10.0.0.1", 00:19:00.931 "trsvcid": "35554" 00:19:00.931 }, 00:19:00.931 "auth": { 00:19:00.931 "state": "completed", 00:19:00.931 "digest": "sha256", 00:19:00.931 "dhgroup": "null" 00:19:00.931 } 00:19:00.931 } 00:19:00.931 ]' 00:19:00.931 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.188 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.188 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.188 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:01.188 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.188 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.188 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.188 17:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.446 17:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:19:02.378 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.378 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:02.378 17:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.378 
17:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.378 17:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.378 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.378 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.378 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:02.378 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:02.636 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:02.636 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.636 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:02.636 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:02.636 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:02.636 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.636 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.636 17:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.636 17:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.636 17:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.636 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.636 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.894 00:19:02.894 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.894 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.894 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.152 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.152 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.152 17:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.152 17:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.152 17:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.152 17:54:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.152 { 00:19:03.152 "cntlid": 9, 00:19:03.152 "qid": 0, 00:19:03.152 "state": "enabled", 00:19:03.152 "listen_address": { 00:19:03.152 "trtype": "TCP", 00:19:03.152 "adrfam": "IPv4", 00:19:03.152 "traddr": "10.0.0.2", 00:19:03.152 "trsvcid": "4420" 00:19:03.152 }, 00:19:03.152 "peer_address": { 00:19:03.152 "trtype": "TCP", 00:19:03.152 "adrfam": "IPv4", 00:19:03.152 "traddr": "10.0.0.1", 00:19:03.152 "trsvcid": "35602" 00:19:03.152 }, 00:19:03.152 "auth": { 00:19:03.152 "state": "completed", 00:19:03.152 "digest": "sha256", 00:19:03.152 "dhgroup": "ffdhe2048" 00:19:03.152 } 00:19:03.152 } 00:19:03.152 ]' 00:19:03.152 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.152 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.152 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.152 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:03.152 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.409 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.409 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.409 17:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.409 17:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.782 17:54:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.782 17:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.040 00:19:05.040 17:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.040 17:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.040 17:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.605 17:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.605 17:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.605 17:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.605 17:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.605 17:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.605 17:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.605 { 00:19:05.605 "cntlid": 11, 00:19:05.605 "qid": 0, 00:19:05.605 "state": "enabled", 00:19:05.605 "listen_address": { 00:19:05.605 "trtype": "TCP", 00:19:05.605 "adrfam": "IPv4", 00:19:05.605 "traddr": "10.0.0.2", 00:19:05.605 "trsvcid": "4420" 00:19:05.605 }, 00:19:05.605 "peer_address": { 00:19:05.605 "trtype": "TCP", 00:19:05.605 "adrfam": "IPv4", 00:19:05.605 "traddr": "10.0.0.1", 00:19:05.605 "trsvcid": "35616" 00:19:05.605 }, 00:19:05.605 "auth": { 00:19:05.605 "state": "completed", 00:19:05.605 "digest": "sha256", 00:19:05.605 "dhgroup": "ffdhe2048" 00:19:05.605 } 00:19:05.605 } 00:19:05.605 ]' 00:19:05.605 17:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.605 17:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.605 17:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.605 17:54:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:05.605 17:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.605 17:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.605 17:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.605 17:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.862 17:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 00:19:06.795 17:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.795 17:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.795 17:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.795 17:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.795 17:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.795 17:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.795 17:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:06.795 17:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:07.053 17:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:07.053 17:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.053 17:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:07.053 17:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:07.053 17:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:07.053 17:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.053 17:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.053 17:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.053 17:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.053 17:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.053 17:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.053 17:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.311 00:19:07.311 17:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.311 17:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.311 17:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.569 17:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.569 17:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.569 17:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.569 17:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.569 17:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.569 17:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.569 { 00:19:07.569 "cntlid": 13, 00:19:07.569 "qid": 0, 00:19:07.569 "state": "enabled", 00:19:07.569 "listen_address": { 00:19:07.569 "trtype": "TCP", 00:19:07.569 "adrfam": "IPv4", 00:19:07.569 "traddr": "10.0.0.2", 00:19:07.569 "trsvcid": "4420" 00:19:07.569 }, 00:19:07.569 "peer_address": { 00:19:07.569 "trtype": "TCP", 00:19:07.569 "adrfam": "IPv4", 00:19:07.569 "traddr": "10.0.0.1", 00:19:07.569 "trsvcid": "35640" 00:19:07.569 }, 00:19:07.569 "auth": { 00:19:07.569 "state": "completed", 00:19:07.569 "digest": "sha256", 00:19:07.569 "dhgroup": "ffdhe2048" 00:19:07.569 } 00:19:07.569 } 00:19:07.569 ]' 00:19:07.569 17:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.827 17:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.827 17:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.827 17:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:07.827 17:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.827 17:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.827 17:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.827 17:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.085 17:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:19:09.018 17:54:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.018 17:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.018 17:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.018 17:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.018 17:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.018 17:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.018 17:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:09.018 17:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:09.274 17:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:09.274 17:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.274 17:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.274 17:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:09.274 17:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:09.274 17:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.274 17:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:09.274 17:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.274 17:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.274 17:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.274 17:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.274 17:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.838 00:19:09.838 17:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.838 17:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.838 17:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.838 17:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.838 17:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
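Each connect_authenticate pass recorded above repeats the same host/target RPC exchange. A simplified sketch of that exchange, condensed from the commands in the log (the host RPC socket /var/tmp/host.sock, the subsystem NQN nqn.2024-03.io.spdk:cnode0, and the key names key0..key3 / ckey0..ckey3 are the test's own fixtures; $hostnqn stands in for the long uuid-based host NQN):

  # host: restrict DH-HMAC-CHAP negotiation to a single digest/dhgroup for this pass
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # target: register the host NQN with the key pair under test
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host: attach a controller, authenticating with the same key pair
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # target: confirm the qpair negotiated the expected digest/dhgroup and reached state "completed"
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
  # host: tear the controller down again before the next pass
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0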
00:19:09.838 17:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.838 17:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.095 17:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.095 17:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.095 { 00:19:10.095 "cntlid": 15, 00:19:10.095 "qid": 0, 00:19:10.095 "state": "enabled", 00:19:10.095 "listen_address": { 00:19:10.095 "trtype": "TCP", 00:19:10.095 "adrfam": "IPv4", 00:19:10.095 "traddr": "10.0.0.2", 00:19:10.095 "trsvcid": "4420" 00:19:10.095 }, 00:19:10.095 "peer_address": { 00:19:10.095 "trtype": "TCP", 00:19:10.095 "adrfam": "IPv4", 00:19:10.095 "traddr": "10.0.0.1", 00:19:10.095 "trsvcid": "44420" 00:19:10.095 }, 00:19:10.095 "auth": { 00:19:10.095 "state": "completed", 00:19:10.095 "digest": "sha256", 00:19:10.095 "dhgroup": "ffdhe2048" 00:19:10.095 } 00:19:10.095 } 00:19:10.095 ]' 00:19:10.095 17:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.095 17:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.095 17:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.095 17:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:10.095 17:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.095 17:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.095 17:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.095 17:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.352 17:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:19:11.286 17:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.286 17:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:11.286 17:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.286 17:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.286 17:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.286 17:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.286 17:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.286 17:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:11.286 17:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:11.546 17:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:11.546 17:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.546 17:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:11.546 17:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:11.546 17:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:11.546 17:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.546 17:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.546 17:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.546 17:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.546 17:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.546 17:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.546 17:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.113 00:19:12.113 17:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.113 17:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.113 17:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.371 17:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.371 17:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.371 17:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.371 17:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.371 17:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.371 17:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.371 { 00:19:12.371 "cntlid": 17, 00:19:12.371 "qid": 0, 00:19:12.371 "state": "enabled", 00:19:12.371 "listen_address": { 00:19:12.371 "trtype": "TCP", 00:19:12.371 "adrfam": "IPv4", 00:19:12.371 "traddr": "10.0.0.2", 00:19:12.371 "trsvcid": "4420" 00:19:12.371 }, 00:19:12.371 "peer_address": { 00:19:12.371 "trtype": "TCP", 00:19:12.371 "adrfam": "IPv4", 00:19:12.371 "traddr": "10.0.0.1", 00:19:12.371 "trsvcid": "44452" 00:19:12.371 }, 00:19:12.371 "auth": { 00:19:12.371 "state": "completed", 00:19:12.371 "digest": "sha256", 00:19:12.371 "dhgroup": "ffdhe3072" 00:19:12.371 } 00:19:12.371 } 00:19:12.371 ]' 00:19:12.371 17:54:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.371 17:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.371 17:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.371 17:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:12.371 17:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.371 17:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.371 17:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.371 17:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.629 17:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:19:13.560 17:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.560 17:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.560 17:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.560 17:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.560 17:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.560 17:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.560 17:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:13.560 17:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:13.817 17:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:13.817 17:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.817 17:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.817 17:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:13.817 17:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:13.817 17:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.817 17:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.817 17:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.817 
17:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.817 17:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.817 17:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.817 17:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.074 00:19:14.074 17:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.074 17:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.074 17:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.638 17:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.638 17:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.638 17:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.638 17:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.638 17:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.638 17:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.638 { 00:19:14.638 "cntlid": 19, 00:19:14.638 "qid": 0, 00:19:14.638 "state": "enabled", 00:19:14.638 "listen_address": { 00:19:14.638 "trtype": "TCP", 00:19:14.638 "adrfam": "IPv4", 00:19:14.638 "traddr": "10.0.0.2", 00:19:14.638 "trsvcid": "4420" 00:19:14.638 }, 00:19:14.638 "peer_address": { 00:19:14.638 "trtype": "TCP", 00:19:14.638 "adrfam": "IPv4", 00:19:14.638 "traddr": "10.0.0.1", 00:19:14.638 "trsvcid": "44478" 00:19:14.638 }, 00:19:14.638 "auth": { 00:19:14.638 "state": "completed", 00:19:14.638 "digest": "sha256", 00:19:14.638 "dhgroup": "ffdhe3072" 00:19:14.638 } 00:19:14.638 } 00:19:14.638 ]' 00:19:14.638 17:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.638 17:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.638 17:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.638 17:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:14.638 17:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.638 17:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.638 17:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.638 17:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.895 17:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 00:19:15.827 17:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.827 17:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.827 17:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.827 17:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.828 17:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.828 17:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.828 17:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.828 17:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:16.086 17:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:16.086 17:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.086 17:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:16.086 17:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:16.086 17:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:16.086 17:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.086 17:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.086 17:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.086 17:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.086 17:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.086 17:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.086 17:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.343 00:19:16.343 17:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.343 17:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.343 17:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.600 17:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.600 17:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.600 17:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.600 17:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.600 17:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.600 17:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.600 { 00:19:16.600 "cntlid": 21, 00:19:16.600 "qid": 0, 00:19:16.600 "state": "enabled", 00:19:16.600 "listen_address": { 00:19:16.600 "trtype": "TCP", 00:19:16.600 "adrfam": "IPv4", 00:19:16.600 "traddr": "10.0.0.2", 00:19:16.600 "trsvcid": "4420" 00:19:16.600 }, 00:19:16.600 "peer_address": { 00:19:16.600 "trtype": "TCP", 00:19:16.600 "adrfam": "IPv4", 00:19:16.600 "traddr": "10.0.0.1", 00:19:16.600 "trsvcid": "44506" 00:19:16.600 }, 00:19:16.600 "auth": { 00:19:16.600 "state": "completed", 00:19:16.600 "digest": "sha256", 00:19:16.600 "dhgroup": "ffdhe3072" 00:19:16.600 } 00:19:16.600 } 00:19:16.600 ]' 00:19:16.600 17:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.600 17:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.600 17:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.600 17:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:16.600 17:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.858 17:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.858 17:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.858 17:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.116 17:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:19:18.047 17:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.047 17:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.047 17:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.047 17:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.047 17:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.047 17:54:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.047 17:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:18.047 17:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:18.305 17:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:18.305 17:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.305 17:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:18.305 17:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:18.305 17:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:18.305 17:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.305 17:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:18.305 17:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.305 17:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.305 17:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.305 17:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.305 17:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.562 00:19:18.562 17:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.562 17:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.562 17:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.819 17:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.819 17:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.819 17:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.819 17:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.819 17:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.819 17:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.819 { 00:19:18.819 "cntlid": 23, 00:19:18.819 "qid": 0, 00:19:18.819 "state": "enabled", 00:19:18.819 "listen_address": { 00:19:18.819 "trtype": "TCP", 00:19:18.819 "adrfam": "IPv4", 00:19:18.819 "traddr": "10.0.0.2", 00:19:18.819 "trsvcid": "4420" 00:19:18.819 }, 00:19:18.819 "peer_address": { 00:19:18.819 "trtype": "TCP", 00:19:18.819 
"adrfam": "IPv4", 00:19:18.819 "traddr": "10.0.0.1", 00:19:18.819 "trsvcid": "44534" 00:19:18.819 }, 00:19:18.819 "auth": { 00:19:18.819 "state": "completed", 00:19:18.819 "digest": "sha256", 00:19:18.819 "dhgroup": "ffdhe3072" 00:19:18.819 } 00:19:18.819 } 00:19:18.819 ]' 00:19:18.819 17:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.819 17:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.819 17:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.076 17:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:19.076 17:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.076 17:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.076 17:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.076 17:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.333 17:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:19:20.265 17:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.265 17:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.265 17:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.265 17:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.265 17:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.265 17:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.265 17:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.265 17:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:20.265 17:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:20.522 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:20.522 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.522 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:20.523 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:20.523 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:20.523 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.523 17:54:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.523 17:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.523 17:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.523 17:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.523 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.523 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.088 00:19:21.088 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.089 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.089 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.346 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.346 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.346 17:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.346 17:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.346 17:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.346 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.346 { 00:19:21.346 "cntlid": 25, 00:19:21.346 "qid": 0, 00:19:21.346 "state": "enabled", 00:19:21.346 "listen_address": { 00:19:21.346 "trtype": "TCP", 00:19:21.346 "adrfam": "IPv4", 00:19:21.346 "traddr": "10.0.0.2", 00:19:21.346 "trsvcid": "4420" 00:19:21.346 }, 00:19:21.346 "peer_address": { 00:19:21.346 "trtype": "TCP", 00:19:21.346 "adrfam": "IPv4", 00:19:21.346 "traddr": "10.0.0.1", 00:19:21.346 "trsvcid": "54132" 00:19:21.346 }, 00:19:21.346 "auth": { 00:19:21.346 "state": "completed", 00:19:21.346 "digest": "sha256", 00:19:21.346 "dhgroup": "ffdhe4096" 00:19:21.346 } 00:19:21.346 } 00:19:21.346 ]' 00:19:21.346 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.346 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.346 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.346 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:21.346 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.346 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.346 17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.346 
17:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.616 17:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:19:22.548 17:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.548 17:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.548 17:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.548 17:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.548 17:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.548 17:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.548 17:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:22.548 17:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:22.806 17:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:22.806 17:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.806 17:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.806 17:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:22.806 17:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:22.806 17:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.806 17:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.806 17:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.806 17:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.806 17:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.806 17:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.806 17:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.370 00:19:23.370 17:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.370 17:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.370 17:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.628 17:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.628 17:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.628 17:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.628 17:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.628 17:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.628 17:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.628 { 00:19:23.628 "cntlid": 27, 00:19:23.628 "qid": 0, 00:19:23.628 "state": "enabled", 00:19:23.628 "listen_address": { 00:19:23.628 "trtype": "TCP", 00:19:23.628 "adrfam": "IPv4", 00:19:23.628 "traddr": "10.0.0.2", 00:19:23.628 "trsvcid": "4420" 00:19:23.628 }, 00:19:23.628 "peer_address": { 00:19:23.628 "trtype": "TCP", 00:19:23.628 "adrfam": "IPv4", 00:19:23.628 "traddr": "10.0.0.1", 00:19:23.628 "trsvcid": "54160" 00:19:23.628 }, 00:19:23.628 "auth": { 00:19:23.628 "state": "completed", 00:19:23.628 "digest": "sha256", 00:19:23.628 "dhgroup": "ffdhe4096" 00:19:23.628 } 00:19:23.628 } 00:19:23.628 ]' 00:19:23.628 17:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.628 17:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.628 17:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.628 17:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:23.628 17:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.628 17:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.628 17:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.628 17:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.886 17:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 00:19:24.819 17:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.819 17:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
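The per-key cycle repeated throughout this trace can be condensed as follows. The sketch below only restates commands that appear verbatim above (rpc.py against /var/tmp/host.sock for the host-side SPDK app, rpc_cmd for the target side, nvme-cli for the kernel initiator); the socket used by rpc_cmd is not visible in this excerpt, the DHHC-1 secrets are shortened placeholders, and error handling is omitted.

```bash
# Condensed view of one connect_authenticate cycle from this trace (not the
# literal target/auth.sh). HOST_RPC talks to the host-side SPDK app on
# /var/tmp/host.sock; TGT_RPC stands in for rpc_cmd, whose socket is not
# shown in this excerpt. The DHHC-1 secrets are placeholders.
HOST_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
TGT_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
SUBNQN="nqn.2024-03.io.spdk:cnode0"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"

# Target side: allow the host with a DH-HMAC-CHAP key pair (key0/ckey0).
$TGT_RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side (SPDK initiator): attach, verify the qpair authenticated, detach.
$HOST_RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
$TGT_RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect "completed"
$HOST_RPC bdev_nvme_detach_controller nvme0

# Kernel initiator: same authentication via nvme-cli, then disconnect.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret "DHHC-1:00:<host key>:" --dhchap-ctrl-secret "DHHC-1:03:<ctrlr key>:"
nvme disconnect -n "$SUBNQN"

# Clean up before the next key/dhgroup combination.
$TGT_RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
```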
00:19:24.819 17:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.819 17:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.819 17:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.819 17:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.819 17:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:24.819 17:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:25.076 17:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:25.076 17:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.076 17:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:25.076 17:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:25.076 17:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:25.076 17:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.076 17:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.076 17:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.076 17:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.076 17:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.076 17:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.077 17:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.642 00:19:25.642 17:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.642 17:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.642 17:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.899 17:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.899 17:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.899 17:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.899 17:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.899 17:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.899 
17:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.899 { 00:19:25.899 "cntlid": 29, 00:19:25.899 "qid": 0, 00:19:25.899 "state": "enabled", 00:19:25.899 "listen_address": { 00:19:25.899 "trtype": "TCP", 00:19:25.899 "adrfam": "IPv4", 00:19:25.899 "traddr": "10.0.0.2", 00:19:25.899 "trsvcid": "4420" 00:19:25.899 }, 00:19:25.899 "peer_address": { 00:19:25.899 "trtype": "TCP", 00:19:25.899 "adrfam": "IPv4", 00:19:25.899 "traddr": "10.0.0.1", 00:19:25.899 "trsvcid": "54182" 00:19:25.899 }, 00:19:25.899 "auth": { 00:19:25.899 "state": "completed", 00:19:25.900 "digest": "sha256", 00:19:25.900 "dhgroup": "ffdhe4096" 00:19:25.900 } 00:19:25.900 } 00:19:25.900 ]' 00:19:25.900 17:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.900 17:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.900 17:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.900 17:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:25.900 17:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.900 17:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.900 17:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.900 17:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.158 17:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:19:27.090 17:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.090 17:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.090 17:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.090 17:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.090 17:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.090 17:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.090 17:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:27.090 17:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:27.347 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:27.347 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.347 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:19:27.347 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:27.347 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:27.347 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.347 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:27.348 17:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.348 17:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.348 17:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.348 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.348 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.964 00:19:27.964 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.964 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.964 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.250 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.250 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.250 17:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.250 17:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.250 17:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.250 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.250 { 00:19:28.250 "cntlid": 31, 00:19:28.250 "qid": 0, 00:19:28.250 "state": "enabled", 00:19:28.250 "listen_address": { 00:19:28.250 "trtype": "TCP", 00:19:28.250 "adrfam": "IPv4", 00:19:28.250 "traddr": "10.0.0.2", 00:19:28.250 "trsvcid": "4420" 00:19:28.250 }, 00:19:28.250 "peer_address": { 00:19:28.250 "trtype": "TCP", 00:19:28.250 "adrfam": "IPv4", 00:19:28.250 "traddr": "10.0.0.1", 00:19:28.250 "trsvcid": "54210" 00:19:28.250 }, 00:19:28.250 "auth": { 00:19:28.250 "state": "completed", 00:19:28.250 "digest": "sha256", 00:19:28.250 "dhgroup": "ffdhe4096" 00:19:28.250 } 00:19:28.250 } 00:19:28.250 ]' 00:19:28.250 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.250 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.250 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.250 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:28.250 17:55:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.250 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.250 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.250 17:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.507 17:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:19:29.437 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.437 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.437 17:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.437 17:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.437 17:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.437 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.437 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.437 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:29.437 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:29.694 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:29.694 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.694 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:29.694 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:29.694 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:29.694 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.694 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.694 17:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.694 17:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.694 17:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.694 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:19:29.694 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.256 00:19:30.256 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.256 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.256 17:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.513 17:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.514 17:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.514 17:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.514 17:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.514 17:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.514 17:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.514 { 00:19:30.514 "cntlid": 33, 00:19:30.514 "qid": 0, 00:19:30.514 "state": "enabled", 00:19:30.514 "listen_address": { 00:19:30.514 "trtype": "TCP", 00:19:30.514 "adrfam": "IPv4", 00:19:30.514 "traddr": "10.0.0.2", 00:19:30.514 "trsvcid": "4420" 00:19:30.514 }, 00:19:30.514 "peer_address": { 00:19:30.514 "trtype": "TCP", 00:19:30.514 "adrfam": "IPv4", 00:19:30.514 "traddr": "10.0.0.1", 00:19:30.514 "trsvcid": "48422" 00:19:30.514 }, 00:19:30.514 "auth": { 00:19:30.514 "state": "completed", 00:19:30.514 "digest": "sha256", 00:19:30.514 "dhgroup": "ffdhe6144" 00:19:30.514 } 00:19:30.514 } 00:19:30.514 ]' 00:19:30.514 17:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.514 17:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.514 17:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.514 17:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:30.514 17:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.514 17:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.514 17:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.514 17:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.770 17:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:19:31.700 17:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:31.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.700 17:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.700 17:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.700 17:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.700 17:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.700 17:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.700 17:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:31.700 17:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:31.957 17:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:31.957 17:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.957 17:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.957 17:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:31.957 17:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:31.957 17:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.957 17:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.957 17:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.957 17:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.957 17:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.957 17:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.957 17:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.520 00:19:32.520 17:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.520 17:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.520 17:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.778 17:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.778 17:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:19:32.778 17:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.778 17:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.778 17:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.778 17:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.778 { 00:19:32.778 "cntlid": 35, 00:19:32.778 "qid": 0, 00:19:32.778 "state": "enabled", 00:19:32.778 "listen_address": { 00:19:32.778 "trtype": "TCP", 00:19:32.778 "adrfam": "IPv4", 00:19:32.778 "traddr": "10.0.0.2", 00:19:32.778 "trsvcid": "4420" 00:19:32.778 }, 00:19:32.778 "peer_address": { 00:19:32.778 "trtype": "TCP", 00:19:32.778 "adrfam": "IPv4", 00:19:32.778 "traddr": "10.0.0.1", 00:19:32.778 "trsvcid": "48456" 00:19:32.778 }, 00:19:32.778 "auth": { 00:19:32.778 "state": "completed", 00:19:32.778 "digest": "sha256", 00:19:32.778 "dhgroup": "ffdhe6144" 00:19:32.778 } 00:19:32.778 } 00:19:32.778 ]' 00:19:32.778 17:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.036 17:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.036 17:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.036 17:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:33.036 17:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.036 17:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.036 17:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.036 17:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.294 17:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 00:19:34.226 17:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.226 17:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.226 17:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.226 17:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.226 17:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.226 17:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.226 17:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.226 17:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
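The xtrace markers around these calls (target/auth.sh@92-96, "${dhgroups[@]}", "${!keys[@]}", connect_authenticate <digest> <dhgroup> <keyid>) imply a nested loop over digest, DH group and key index. A plausible reconstruction of that structure is sketched below; the array contents are inferred only from the combinations exercised in this excerpt (sha256/sha384, null and ffdhe4096/6144/8192, keys 0-3), so iterations outside this excerpt may cover additional groups.

```bash
# Reconstructed iteration structure suggested by the xtrace markers; this is an
# inference, not the literal script. Array values come from the combinations
# visible in this portion of the trace.
digests=("sha256" "sha384")
dhgroups=("null" "ffdhe4096" "ffdhe6144" "ffdhe8192")
keys=([0]="key0" [1]="key1" [2]="key2" [3]="key3")

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Pin the host-side bdev_nvme layer to one digest/dhgroup combination...
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # ...then run the add-host/attach/verify/detach/nvme-connect cycle for this key.
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done
```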
00:19:34.484 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:34.484 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.484 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:34.484 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:34.484 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:34.484 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.484 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.484 17:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.484 17:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.484 17:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.484 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.484 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.050 00:19:35.050 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.050 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.050 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.308 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.308 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.308 17:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.308 17:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.308 17:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.308 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.308 { 00:19:35.308 "cntlid": 37, 00:19:35.308 "qid": 0, 00:19:35.308 "state": "enabled", 00:19:35.308 "listen_address": { 00:19:35.308 "trtype": "TCP", 00:19:35.308 "adrfam": "IPv4", 00:19:35.308 "traddr": "10.0.0.2", 00:19:35.308 "trsvcid": "4420" 00:19:35.308 }, 00:19:35.308 "peer_address": { 00:19:35.308 "trtype": "TCP", 00:19:35.308 "adrfam": "IPv4", 00:19:35.308 "traddr": "10.0.0.1", 00:19:35.308 "trsvcid": "48492" 00:19:35.308 }, 00:19:35.308 "auth": { 00:19:35.308 "state": "completed", 00:19:35.308 "digest": "sha256", 00:19:35.308 "dhgroup": "ffdhe6144" 00:19:35.308 } 00:19:35.308 } 00:19:35.308 ]' 00:19:35.308 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:35.308 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.308 17:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.308 17:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.308 17:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.308 17:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.308 17:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.308 17:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.566 17:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:19:36.497 17:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.497 17:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.497 17:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.497 17:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.497 17:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.497 17:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.497 17:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:36.497 17:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:36.755 17:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:36.755 17:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.755 17:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:36.755 17:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:36.755 17:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:36.755 17:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.755 17:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:36.755 17:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.755 17:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.755 17:55:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.755 17:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.755 17:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.318 00:19:37.575 17:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.575 17:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.575 17:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.575 17:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.575 17:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.575 17:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.575 17:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.832 17:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.832 17:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.832 { 00:19:37.832 "cntlid": 39, 00:19:37.832 "qid": 0, 00:19:37.832 "state": "enabled", 00:19:37.832 "listen_address": { 00:19:37.832 "trtype": "TCP", 00:19:37.832 "adrfam": "IPv4", 00:19:37.832 "traddr": "10.0.0.2", 00:19:37.832 "trsvcid": "4420" 00:19:37.832 }, 00:19:37.832 "peer_address": { 00:19:37.832 "trtype": "TCP", 00:19:37.832 "adrfam": "IPv4", 00:19:37.832 "traddr": "10.0.0.1", 00:19:37.832 "trsvcid": "48518" 00:19:37.832 }, 00:19:37.832 "auth": { 00:19:37.832 "state": "completed", 00:19:37.832 "digest": "sha256", 00:19:37.832 "dhgroup": "ffdhe6144" 00:19:37.832 } 00:19:37.832 } 00:19:37.832 ]' 00:19:37.832 17:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.832 17:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.832 17:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.832 17:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:37.832 17:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.832 17:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.833 17:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.833 17:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.090 17:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:19:39.024 17:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.024 17:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.024 17:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.024 17:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.024 17:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.024 17:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.024 17:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.024 17:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:39.024 17:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:39.280 17:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:39.280 17:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.280 17:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.280 17:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:39.280 17:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.280 17:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.280 17:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.280 17:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.280 17:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.281 17:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.281 17:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.281 17:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.211 00:19:40.211 17:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.211 17:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.211 17:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.468 17:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.468 17:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.468 17:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.468 17:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.468 17:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.468 17:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.468 { 00:19:40.468 "cntlid": 41, 00:19:40.468 "qid": 0, 00:19:40.468 "state": "enabled", 00:19:40.468 "listen_address": { 00:19:40.468 "trtype": "TCP", 00:19:40.468 "adrfam": "IPv4", 00:19:40.468 "traddr": "10.0.0.2", 00:19:40.468 "trsvcid": "4420" 00:19:40.468 }, 00:19:40.468 "peer_address": { 00:19:40.468 "trtype": "TCP", 00:19:40.468 "adrfam": "IPv4", 00:19:40.468 "traddr": "10.0.0.1", 00:19:40.468 "trsvcid": "33222" 00:19:40.468 }, 00:19:40.468 "auth": { 00:19:40.468 "state": "completed", 00:19:40.468 "digest": "sha256", 00:19:40.468 "dhgroup": "ffdhe8192" 00:19:40.468 } 00:19:40.468 } 00:19:40.468 ]' 00:19:40.468 17:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.468 17:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.468 17:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.468 17:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.468 17:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.468 17:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.468 17:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.468 17:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.032 17:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:19:41.964 17:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.964 17:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.964 17:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.964 17:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.964 17:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.964 17:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:19:41.964 17:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:41.964 17:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:42.221 17:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:42.221 17:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.221 17:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:42.221 17:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:42.221 17:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:42.221 17:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.221 17:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.221 17:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.221 17:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.221 17:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.221 17:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.221 17:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.204 00:19:43.204 17:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.204 17:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.205 17:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.205 17:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.205 17:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.205 17:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.205 17:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.205 17:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.205 17:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.205 { 00:19:43.205 "cntlid": 43, 00:19:43.205 "qid": 0, 00:19:43.205 "state": "enabled", 00:19:43.205 "listen_address": { 00:19:43.205 "trtype": "TCP", 00:19:43.205 "adrfam": "IPv4", 00:19:43.205 "traddr": "10.0.0.2", 00:19:43.205 "trsvcid": "4420" 00:19:43.205 }, 00:19:43.205 "peer_address": { 
00:19:43.205 "trtype": "TCP", 00:19:43.205 "adrfam": "IPv4", 00:19:43.205 "traddr": "10.0.0.1", 00:19:43.205 "trsvcid": "33244" 00:19:43.205 }, 00:19:43.205 "auth": { 00:19:43.205 "state": "completed", 00:19:43.205 "digest": "sha256", 00:19:43.205 "dhgroup": "ffdhe8192" 00:19:43.205 } 00:19:43.205 } 00:19:43.205 ]' 00:19:43.205 17:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.205 17:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.205 17:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.462 17:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.462 17:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.462 17:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.462 17:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.462 17:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.720 17:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 00:19:44.651 17:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.651 17:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.651 17:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.651 17:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.651 17:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.651 17:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.651 17:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.651 17:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.909 17:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:44.909 17:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.909 17:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.909 17:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:44.909 17:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:44.909 17:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.909 17:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.909 17:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.909 17:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.909 17:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.909 17:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.909 17:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.842 00:19:45.842 17:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.842 17:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.842 17:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.099 17:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.100 17:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.100 17:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.100 17:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.100 17:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.100 17:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.100 { 00:19:46.100 "cntlid": 45, 00:19:46.100 "qid": 0, 00:19:46.100 "state": "enabled", 00:19:46.100 "listen_address": { 00:19:46.100 "trtype": "TCP", 00:19:46.100 "adrfam": "IPv4", 00:19:46.100 "traddr": "10.0.0.2", 00:19:46.100 "trsvcid": "4420" 00:19:46.100 }, 00:19:46.100 "peer_address": { 00:19:46.100 "trtype": "TCP", 00:19:46.100 "adrfam": "IPv4", 00:19:46.100 "traddr": "10.0.0.1", 00:19:46.100 "trsvcid": "33284" 00:19:46.100 }, 00:19:46.100 "auth": { 00:19:46.100 "state": "completed", 00:19:46.100 "digest": "sha256", 00:19:46.100 "dhgroup": "ffdhe8192" 00:19:46.100 } 00:19:46.100 } 00:19:46.100 ]' 00:19:46.100 17:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.100 17:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.100 17:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.100 17:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:46.100 17:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.100 17:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.100 17:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.100 17:55:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.357 17:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:19:47.290 17:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.290 17:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.290 17:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.290 17:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.290 17:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.290 17:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.290 17:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.290 17:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.548 17:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:47.548 17:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.548 17:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.548 17:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:47.548 17:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:47.548 17:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.548 17:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:47.548 17:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.548 17:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.548 17:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.548 17:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.548 17:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
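Each cycle validates the negotiated parameters by reading the subsystem's qpairs back from the target and comparing the reported digest, dhgroup and auth state, exactly as the repeated jq expressions above do. Wrapped as a small stand-alone helper (the function name and argument order here are illustrative, not taken from the script), that check might look like:

```bash
# Illustrative helper mirroring the jq checks in this trace: fetch the first
# qpair of the subsystem and confirm the negotiated DH-HMAC-CHAP parameters.
verify_qpair_auth() {
    local subnqn="$1" digest="$2" dhgroup="$3" qpairs
    qpairs="$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")"
    [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "$digest"   ]] || return 1
    [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "$dhgroup"  ]] || return 1
    [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == "completed" ]] || return 1
}

# Example: verify_qpair_auth nqn.2024-03.io.spdk:cnode0 sha256 ffdhe8192
```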
00:19:48.483 00:19:48.483 17:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.483 17:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.483 17:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.739 17:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.739 17:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.739 17:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.739 17:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.739 17:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.739 17:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.739 { 00:19:48.739 "cntlid": 47, 00:19:48.739 "qid": 0, 00:19:48.739 "state": "enabled", 00:19:48.739 "listen_address": { 00:19:48.739 "trtype": "TCP", 00:19:48.739 "adrfam": "IPv4", 00:19:48.739 "traddr": "10.0.0.2", 00:19:48.739 "trsvcid": "4420" 00:19:48.739 }, 00:19:48.739 "peer_address": { 00:19:48.739 "trtype": "TCP", 00:19:48.739 "adrfam": "IPv4", 00:19:48.739 "traddr": "10.0.0.1", 00:19:48.739 "trsvcid": "33312" 00:19:48.739 }, 00:19:48.739 "auth": { 00:19:48.739 "state": "completed", 00:19:48.739 "digest": "sha256", 00:19:48.739 "dhgroup": "ffdhe8192" 00:19:48.739 } 00:19:48.739 } 00:19:48.739 ]' 00:19:48.739 17:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.739 17:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.739 17:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.739 17:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:48.739 17:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.739 17:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.739 17:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.739 17:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.303 17:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:19:50.232 17:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.233 
17:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.233 17:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.796 00:19:50.796 17:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.796 17:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.796 17:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.054 17:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.054 17:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.054 17:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.054 17:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.054 17:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.054 17:55:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.054 { 00:19:51.054 "cntlid": 49, 00:19:51.054 "qid": 0, 00:19:51.054 "state": "enabled", 00:19:51.054 "listen_address": { 00:19:51.054 "trtype": "TCP", 00:19:51.054 "adrfam": "IPv4", 00:19:51.054 "traddr": "10.0.0.2", 00:19:51.054 "trsvcid": "4420" 00:19:51.054 }, 00:19:51.054 "peer_address": { 00:19:51.054 "trtype": "TCP", 00:19:51.054 "adrfam": "IPv4", 00:19:51.054 "traddr": "10.0.0.1", 00:19:51.054 "trsvcid": "47024" 00:19:51.054 }, 00:19:51.054 "auth": { 00:19:51.054 "state": "completed", 00:19:51.054 "digest": "sha384", 00:19:51.054 "dhgroup": "null" 00:19:51.054 } 00:19:51.054 } 00:19:51.054 ]' 00:19:51.054 17:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.054 17:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.054 17:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.054 17:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:51.054 17:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.054 17:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.054 17:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.054 17:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.311 17:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:19:52.243 17:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.243 17:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.243 17:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.243 17:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.243 17:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.243 17:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.243 17:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:52.243 17:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:52.501 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:52.501 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.501 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:19:52.501 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:52.501 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:52.501 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.501 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.501 17:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.501 17:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.501 17:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.501 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.501 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.758 00:19:52.758 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.758 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.758 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.015 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.015 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.015 17:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.015 17:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.273 17:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.273 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.273 { 00:19:53.273 "cntlid": 51, 00:19:53.273 "qid": 0, 00:19:53.273 "state": "enabled", 00:19:53.273 "listen_address": { 00:19:53.273 "trtype": "TCP", 00:19:53.273 "adrfam": "IPv4", 00:19:53.273 "traddr": "10.0.0.2", 00:19:53.273 "trsvcid": "4420" 00:19:53.273 }, 00:19:53.273 "peer_address": { 00:19:53.273 "trtype": "TCP", 00:19:53.273 "adrfam": "IPv4", 00:19:53.273 "traddr": "10.0.0.1", 00:19:53.273 "trsvcid": "47064" 00:19:53.273 }, 00:19:53.273 "auth": { 00:19:53.273 "state": "completed", 00:19:53.273 "digest": "sha384", 00:19:53.273 "dhgroup": "null" 00:19:53.273 } 00:19:53.273 } 00:19:53.273 ]' 00:19:53.273 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.273 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.273 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.273 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
00:19:53.273 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.273 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.273 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.273 17:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.530 17:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 00:19:54.462 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.462 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.462 17:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.462 17:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.462 17:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.462 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.462 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:54.462 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:54.719 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:54.719 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.719 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:54.719 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:54.719 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:54.719 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.719 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.719 17:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.719 17:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.719 17:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.719 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:54.720 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.976 00:19:54.976 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.976 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.976 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.234 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.234 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.234 17:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.234 17:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.234 17:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.234 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.234 { 00:19:55.234 "cntlid": 53, 00:19:55.234 "qid": 0, 00:19:55.234 "state": "enabled", 00:19:55.234 "listen_address": { 00:19:55.234 "trtype": "TCP", 00:19:55.234 "adrfam": "IPv4", 00:19:55.234 "traddr": "10.0.0.2", 00:19:55.234 "trsvcid": "4420" 00:19:55.234 }, 00:19:55.234 "peer_address": { 00:19:55.234 "trtype": "TCP", 00:19:55.234 "adrfam": "IPv4", 00:19:55.234 "traddr": "10.0.0.1", 00:19:55.234 "trsvcid": "47084" 00:19:55.234 }, 00:19:55.234 "auth": { 00:19:55.234 "state": "completed", 00:19:55.234 "digest": "sha384", 00:19:55.234 "dhgroup": "null" 00:19:55.234 } 00:19:55.234 } 00:19:55.234 ]' 00:19:55.234 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.234 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.234 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.234 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:55.234 17:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.492 17:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.492 17:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.492 17:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.750 17:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:19:56.682 17:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.682 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:19:56.682 17:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.682 17:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.682 17:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.682 17:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.682 17:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.682 17:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:56.682 17:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:56.939 17:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:56.939 17:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.939 17:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:56.939 17:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:56.939 17:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:56.939 17:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.939 17:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:56.939 17:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.939 17:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.939 17:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.939 17:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.939 17:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.196 00:19:57.196 17:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.196 17:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.196 17:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.454 17:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.454 17:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.454 17:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.454 17:55:32 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:57.454 17:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.454 17:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.454 { 00:19:57.454 "cntlid": 55, 00:19:57.454 "qid": 0, 00:19:57.454 "state": "enabled", 00:19:57.454 "listen_address": { 00:19:57.454 "trtype": "TCP", 00:19:57.454 "adrfam": "IPv4", 00:19:57.454 "traddr": "10.0.0.2", 00:19:57.454 "trsvcid": "4420" 00:19:57.454 }, 00:19:57.454 "peer_address": { 00:19:57.454 "trtype": "TCP", 00:19:57.454 "adrfam": "IPv4", 00:19:57.454 "traddr": "10.0.0.1", 00:19:57.454 "trsvcid": "47112" 00:19:57.454 }, 00:19:57.454 "auth": { 00:19:57.454 "state": "completed", 00:19:57.454 "digest": "sha384", 00:19:57.454 "dhgroup": "null" 00:19:57.454 } 00:19:57.454 } 00:19:57.454 ]' 00:19:57.454 17:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.454 17:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.454 17:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.454 17:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:57.454 17:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.454 17:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.454 17:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.454 17:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.712 17:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:19:58.700 17:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.700 17:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.700 17:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.700 17:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.700 17:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.700 17:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.701 17:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.701 17:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:58.701 17:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:58.958 17:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:58.958 
17:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.958 17:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:58.958 17:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:58.958 17:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:58.958 17:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.958 17:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.958 17:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.958 17:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.958 17:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.958 17:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.958 17:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.216 00:19:59.473 17:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.473 17:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.473 17:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.473 17:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.473 17:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.473 17:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.473 17:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.731 17:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.731 17:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.731 { 00:19:59.731 "cntlid": 57, 00:19:59.731 "qid": 0, 00:19:59.731 "state": "enabled", 00:19:59.731 "listen_address": { 00:19:59.731 "trtype": "TCP", 00:19:59.731 "adrfam": "IPv4", 00:19:59.731 "traddr": "10.0.0.2", 00:19:59.731 "trsvcid": "4420" 00:19:59.731 }, 00:19:59.731 "peer_address": { 00:19:59.731 "trtype": "TCP", 00:19:59.731 "adrfam": "IPv4", 00:19:59.731 "traddr": "10.0.0.1", 00:19:59.731 "trsvcid": "47130" 00:19:59.731 }, 00:19:59.731 "auth": { 00:19:59.731 "state": "completed", 00:19:59.731 "digest": "sha384", 00:19:59.731 "dhgroup": "ffdhe2048" 00:19:59.731 } 00:19:59.731 } 00:19:59.731 ]' 00:19:59.731 17:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.731 17:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.731 17:55:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.731 17:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:59.731 17:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.731 17:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.731 17:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.731 17:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.989 17:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:20:00.922 17:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.922 17:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.922 17:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.922 17:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.922 17:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.922 17:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.922 17:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:00.922 17:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:01.178 17:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:01.178 17:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.178 17:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:01.178 17:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:01.178 17:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:01.178 17:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.178 17:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.178 17:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.178 17:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.178 17:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.178 17:55:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.178 17:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.435 00:20:01.435 17:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.435 17:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.435 17:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.693 17:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.693 17:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.693 17:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.693 17:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.693 17:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.693 17:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.693 { 00:20:01.693 "cntlid": 59, 00:20:01.693 "qid": 0, 00:20:01.693 "state": "enabled", 00:20:01.693 "listen_address": { 00:20:01.693 "trtype": "TCP", 00:20:01.693 "adrfam": "IPv4", 00:20:01.693 "traddr": "10.0.0.2", 00:20:01.693 "trsvcid": "4420" 00:20:01.693 }, 00:20:01.693 "peer_address": { 00:20:01.693 "trtype": "TCP", 00:20:01.693 "adrfam": "IPv4", 00:20:01.693 "traddr": "10.0.0.1", 00:20:01.693 "trsvcid": "52014" 00:20:01.693 }, 00:20:01.693 "auth": { 00:20:01.693 "state": "completed", 00:20:01.693 "digest": "sha384", 00:20:01.693 "dhgroup": "ffdhe2048" 00:20:01.693 } 00:20:01.693 } 00:20:01.693 ]' 00:20:01.693 17:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.966 17:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.966 17:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.967 17:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:01.967 17:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.967 17:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.967 17:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.967 17:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.224 17:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 00:20:03.154 17:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.154 17:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.154 17:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.154 17:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.154 17:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.154 17:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.154 17:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:03.154 17:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:03.412 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:03.412 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.412 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.412 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:03.412 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:03.412 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.412 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.412 17:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.412 17:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.412 17:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.413 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.413 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.671 00:20:03.671 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.671 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.671 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:20:03.928 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.928 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.928 17:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.928 17:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.928 17:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.928 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.928 { 00:20:03.928 "cntlid": 61, 00:20:03.928 "qid": 0, 00:20:03.928 "state": "enabled", 00:20:03.928 "listen_address": { 00:20:03.928 "trtype": "TCP", 00:20:03.928 "adrfam": "IPv4", 00:20:03.928 "traddr": "10.0.0.2", 00:20:03.928 "trsvcid": "4420" 00:20:03.928 }, 00:20:03.928 "peer_address": { 00:20:03.928 "trtype": "TCP", 00:20:03.928 "adrfam": "IPv4", 00:20:03.928 "traddr": "10.0.0.1", 00:20:03.928 "trsvcid": "52038" 00:20:03.928 }, 00:20:03.928 "auth": { 00:20:03.928 "state": "completed", 00:20:03.928 "digest": "sha384", 00:20:03.928 "dhgroup": "ffdhe2048" 00:20:03.928 } 00:20:03.928 } 00:20:03.928 ]' 00:20:03.928 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.928 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.928 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.186 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:04.186 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.186 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.186 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.186 17:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.444 17:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:20:05.377 17:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.377 17:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.377 17:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.377 17:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.377 17:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.377 17:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.377 17:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:20:05.377 17:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:05.635 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:05.635 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.635 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:05.635 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:05.635 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:05.635 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.635 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:05.635 17:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.635 17:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.635 17:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.635 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.635 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.893 00:20:05.893 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.893 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.893 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.150 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.150 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.150 17:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.150 17:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.150 17:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.150 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.150 { 00:20:06.150 "cntlid": 63, 00:20:06.150 "qid": 0, 00:20:06.150 "state": "enabled", 00:20:06.150 "listen_address": { 00:20:06.150 "trtype": "TCP", 00:20:06.150 "adrfam": "IPv4", 00:20:06.150 "traddr": "10.0.0.2", 00:20:06.150 "trsvcid": "4420" 00:20:06.150 }, 00:20:06.150 "peer_address": { 00:20:06.150 "trtype": "TCP", 00:20:06.150 "adrfam": "IPv4", 00:20:06.150 "traddr": "10.0.0.1", 00:20:06.150 "trsvcid": "52072" 00:20:06.150 }, 00:20:06.150 "auth": { 00:20:06.150 "state": "completed", 00:20:06.150 "digest": 
"sha384", 00:20:06.150 "dhgroup": "ffdhe2048" 00:20:06.150 } 00:20:06.150 } 00:20:06.150 ]' 00:20:06.150 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.150 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.150 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.408 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:06.408 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.408 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.408 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.408 17:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.665 17:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:20:07.597 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.598 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.598 17:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.598 17:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.598 17:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.598 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.598 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.598 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:07.598 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:07.855 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:07.856 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.856 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:07.856 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:07.856 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:07.856 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.856 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:07.856 17:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.856 17:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.856 17:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.856 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.856 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.114 00:20:08.114 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.114 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.114 17:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.376 17:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.376 17:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.376 17:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.376 17:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.376 17:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.376 17:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.376 { 00:20:08.376 "cntlid": 65, 00:20:08.376 "qid": 0, 00:20:08.376 "state": "enabled", 00:20:08.376 "listen_address": { 00:20:08.376 "trtype": "TCP", 00:20:08.376 "adrfam": "IPv4", 00:20:08.376 "traddr": "10.0.0.2", 00:20:08.376 "trsvcid": "4420" 00:20:08.376 }, 00:20:08.376 "peer_address": { 00:20:08.376 "trtype": "TCP", 00:20:08.376 "adrfam": "IPv4", 00:20:08.376 "traddr": "10.0.0.1", 00:20:08.376 "trsvcid": "52092" 00:20:08.376 }, 00:20:08.377 "auth": { 00:20:08.377 "state": "completed", 00:20:08.377 "digest": "sha384", 00:20:08.377 "dhgroup": "ffdhe3072" 00:20:08.377 } 00:20:08.377 } 00:20:08.377 ]' 00:20:08.377 17:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.377 17:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.377 17:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.635 17:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:08.635 17:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.635 17:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.635 17:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.635 17:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.891 
17:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:20:09.825 17:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.825 17:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.825 17:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.825 17:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.825 17:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.825 17:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.825 17:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.825 17:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:10.083 17:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:10.083 17:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.083 17:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:10.083 17:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:10.083 17:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:10.083 17:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.083 17:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.083 17:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.083 17:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.083 17:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.083 17:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.083 17:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.341 00:20:10.341 17:55:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.341 17:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.341 17:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.599 17:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.599 17:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.599 17:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.599 17:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.599 17:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.599 17:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.599 { 00:20:10.599 "cntlid": 67, 00:20:10.599 "qid": 0, 00:20:10.599 "state": "enabled", 00:20:10.599 "listen_address": { 00:20:10.599 "trtype": "TCP", 00:20:10.599 "adrfam": "IPv4", 00:20:10.599 "traddr": "10.0.0.2", 00:20:10.599 "trsvcid": "4420" 00:20:10.599 }, 00:20:10.599 "peer_address": { 00:20:10.599 "trtype": "TCP", 00:20:10.599 "adrfam": "IPv4", 00:20:10.599 "traddr": "10.0.0.1", 00:20:10.599 "trsvcid": "54796" 00:20:10.599 }, 00:20:10.599 "auth": { 00:20:10.599 "state": "completed", 00:20:10.599 "digest": "sha384", 00:20:10.599 "dhgroup": "ffdhe3072" 00:20:10.599 } 00:20:10.599 } 00:20:10.599 ]' 00:20:10.599 17:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.599 17:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.599 17:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.599 17:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:10.599 17:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.599 17:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.599 17:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.599 17:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.858 17:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 00:20:11.791 17:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.791 17:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.791 17:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.791 17:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.791 
17:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.791 17:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.791 17:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.791 17:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:12.356 17:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:12.356 17:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.356 17:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:12.356 17:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:12.356 17:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:12.356 17:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.356 17:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.356 17:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.356 17:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.356 17:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.356 17:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.356 17:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.613 00:20:12.613 17:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.613 17:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.613 17:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.870 17:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.870 17:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.870 17:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.870 17:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.870 17:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.870 17:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.870 { 00:20:12.870 "cntlid": 69, 00:20:12.870 "qid": 0, 00:20:12.870 "state": "enabled", 00:20:12.870 "listen_address": { 
00:20:12.870 "trtype": "TCP", 00:20:12.870 "adrfam": "IPv4", 00:20:12.870 "traddr": "10.0.0.2", 00:20:12.870 "trsvcid": "4420" 00:20:12.870 }, 00:20:12.870 "peer_address": { 00:20:12.870 "trtype": "TCP", 00:20:12.870 "adrfam": "IPv4", 00:20:12.870 "traddr": "10.0.0.1", 00:20:12.870 "trsvcid": "54808" 00:20:12.870 }, 00:20:12.870 "auth": { 00:20:12.870 "state": "completed", 00:20:12.870 "digest": "sha384", 00:20:12.870 "dhgroup": "ffdhe3072" 00:20:12.870 } 00:20:12.870 } 00:20:12.870 ]' 00:20:12.870 17:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.870 17:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.870 17:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.870 17:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:12.870 17:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.870 17:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.870 17:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.870 17:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.169 17:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:20:14.105 17:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.105 17:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.105 17:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.105 17:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.105 17:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.105 17:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.105 17:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.105 17:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.368 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:14.368 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.368 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:14.368 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:14.368 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:14.368 
17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.368 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:14.368 17:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.368 17:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.368 17:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.368 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.368 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.625 00:20:14.625 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.625 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.625 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.883 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.883 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.883 17:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.883 17:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.883 17:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.883 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.883 { 00:20:14.883 "cntlid": 71, 00:20:14.883 "qid": 0, 00:20:14.883 "state": "enabled", 00:20:14.883 "listen_address": { 00:20:14.883 "trtype": "TCP", 00:20:14.883 "adrfam": "IPv4", 00:20:14.883 "traddr": "10.0.0.2", 00:20:14.883 "trsvcid": "4420" 00:20:14.883 }, 00:20:14.883 "peer_address": { 00:20:14.883 "trtype": "TCP", 00:20:14.883 "adrfam": "IPv4", 00:20:14.883 "traddr": "10.0.0.1", 00:20:14.883 "trsvcid": "54836" 00:20:14.883 }, 00:20:14.883 "auth": { 00:20:14.883 "state": "completed", 00:20:14.883 "digest": "sha384", 00:20:14.883 "dhgroup": "ffdhe3072" 00:20:14.883 } 00:20:14.883 } 00:20:14.883 ]' 00:20:15.140 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.140 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.140 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.140 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:15.140 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.140 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.140 17:55:49 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.140 17:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.397 17:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:20:16.329 17:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.329 17:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.329 17:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.329 17:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.329 17:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.329 17:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.329 17:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.329 17:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:16.329 17:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:16.587 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:16.587 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.587 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:16.587 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:16.587 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:16.587 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.587 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.587 17:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.587 17:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.587 17:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.587 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.587 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.844 00:20:16.844 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.844 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.844 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.102 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.102 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.102 17:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.102 17:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.102 17:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.102 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.102 { 00:20:17.102 "cntlid": 73, 00:20:17.102 "qid": 0, 00:20:17.102 "state": "enabled", 00:20:17.102 "listen_address": { 00:20:17.102 "trtype": "TCP", 00:20:17.102 "adrfam": "IPv4", 00:20:17.102 "traddr": "10.0.0.2", 00:20:17.102 "trsvcid": "4420" 00:20:17.102 }, 00:20:17.102 "peer_address": { 00:20:17.102 "trtype": "TCP", 00:20:17.102 "adrfam": "IPv4", 00:20:17.102 "traddr": "10.0.0.1", 00:20:17.102 "trsvcid": "54868" 00:20:17.102 }, 00:20:17.102 "auth": { 00:20:17.102 "state": "completed", 00:20:17.102 "digest": "sha384", 00:20:17.102 "dhgroup": "ffdhe4096" 00:20:17.102 } 00:20:17.102 } 00:20:17.102 ]' 00:20:17.102 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.360 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.360 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.360 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:17.360 17:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.360 17:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.360 17:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.360 17:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.618 17:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:20:18.550 17:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.550 17:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.550 17:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.550 17:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.550 17:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.550 17:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.550 17:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:18.550 17:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:18.808 17:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:18.808 17:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.808 17:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.808 17:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:18.808 17:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:18.808 17:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.808 17:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.808 17:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.808 17:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.808 17:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.808 17:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.808 17:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.371 00:20:19.371 17:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.371 17:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.371 17:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.629 17:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.629 17:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.629 17:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.629 17:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:19.629 17:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.629 17:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.629 { 00:20:19.629 "cntlid": 75, 00:20:19.629 "qid": 0, 00:20:19.629 "state": "enabled", 00:20:19.629 "listen_address": { 00:20:19.629 "trtype": "TCP", 00:20:19.629 "adrfam": "IPv4", 00:20:19.629 "traddr": "10.0.0.2", 00:20:19.629 "trsvcid": "4420" 00:20:19.629 }, 00:20:19.629 "peer_address": { 00:20:19.629 "trtype": "TCP", 00:20:19.629 "adrfam": "IPv4", 00:20:19.629 "traddr": "10.0.0.1", 00:20:19.629 "trsvcid": "54888" 00:20:19.629 }, 00:20:19.629 "auth": { 00:20:19.629 "state": "completed", 00:20:19.629 "digest": "sha384", 00:20:19.629 "dhgroup": "ffdhe4096" 00:20:19.629 } 00:20:19.629 } 00:20:19.629 ]' 00:20:19.629 17:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.629 17:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.629 17:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.629 17:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:19.629 17:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.629 17:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.629 17:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.629 17:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.886 17:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 00:20:20.820 17:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.820 17:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.820 17:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.820 17:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.820 17:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.820 17:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.820 17:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:20.820 17:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:21.077 17:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:21.077 17:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:20:21.077 17:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:21.077 17:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:21.077 17:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:21.077 17:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.077 17:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.077 17:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.077 17:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.077 17:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.077 17:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.077 17:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.641 00:20:21.641 17:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.641 17:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.641 17:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.898 17:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.898 17:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.898 17:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.898 17:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.898 17:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.898 17:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.898 { 00:20:21.898 "cntlid": 77, 00:20:21.898 "qid": 0, 00:20:21.898 "state": "enabled", 00:20:21.898 "listen_address": { 00:20:21.898 "trtype": "TCP", 00:20:21.898 "adrfam": "IPv4", 00:20:21.898 "traddr": "10.0.0.2", 00:20:21.898 "trsvcid": "4420" 00:20:21.898 }, 00:20:21.898 "peer_address": { 00:20:21.898 "trtype": "TCP", 00:20:21.898 "adrfam": "IPv4", 00:20:21.898 "traddr": "10.0.0.1", 00:20:21.898 "trsvcid": "56808" 00:20:21.898 }, 00:20:21.898 "auth": { 00:20:21.898 "state": "completed", 00:20:21.898 "digest": "sha384", 00:20:21.898 "dhgroup": "ffdhe4096" 00:20:21.898 } 00:20:21.898 } 00:20:21.898 ]' 00:20:21.898 17:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.898 17:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.898 17:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:20:21.898 17:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:21.898 17:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.898 17:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.898 17:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.898 17:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.155 17:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:20:23.088 17:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.088 17:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.088 17:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.088 17:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.088 17:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.088 17:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.088 17:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:23.088 17:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:23.346 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:23.346 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.346 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.346 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:23.346 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:23.346 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.346 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:23.346 17:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.346 17:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.346 17:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.346 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.346 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.912 00:20:23.912 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.912 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.912 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.912 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.913 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.913 17:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.913 17:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.913 17:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.913 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.913 { 00:20:23.913 "cntlid": 79, 00:20:23.913 "qid": 0, 00:20:23.913 "state": "enabled", 00:20:23.913 "listen_address": { 00:20:23.913 "trtype": "TCP", 00:20:23.913 "adrfam": "IPv4", 00:20:23.913 "traddr": "10.0.0.2", 00:20:23.913 "trsvcid": "4420" 00:20:23.913 }, 00:20:23.913 "peer_address": { 00:20:23.913 "trtype": "TCP", 00:20:23.913 "adrfam": "IPv4", 00:20:23.913 "traddr": "10.0.0.1", 00:20:23.913 "trsvcid": "56846" 00:20:23.913 }, 00:20:23.913 "auth": { 00:20:23.913 "state": "completed", 00:20:23.913 "digest": "sha384", 00:20:23.913 "dhgroup": "ffdhe4096" 00:20:23.913 } 00:20:23.913 } 00:20:23.913 ]' 00:20:23.913 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.171 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.171 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.171 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:24.171 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.171 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.171 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.171 17:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.429 17:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:20:25.364 17:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.364 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.364 17:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.364 17:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.364 17:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.364 17:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.364 17:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.364 17:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.364 17:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:25.364 17:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:25.622 17:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:25.622 17:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.622 17:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.622 17:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:25.622 17:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:25.622 17:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.622 17:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.622 17:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.622 17:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.622 17:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.622 17:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.622 17:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.194 00:20:26.194 17:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.194 17:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.194 17:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.457 17:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.457 17:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.457 17:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.457 17:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.457 17:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.457 17:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.457 { 00:20:26.457 "cntlid": 81, 00:20:26.457 "qid": 0, 00:20:26.457 "state": "enabled", 00:20:26.457 "listen_address": { 00:20:26.457 "trtype": "TCP", 00:20:26.457 "adrfam": "IPv4", 00:20:26.457 "traddr": "10.0.0.2", 00:20:26.457 "trsvcid": "4420" 00:20:26.457 }, 00:20:26.457 "peer_address": { 00:20:26.457 "trtype": "TCP", 00:20:26.457 "adrfam": "IPv4", 00:20:26.457 "traddr": "10.0.0.1", 00:20:26.457 "trsvcid": "56862" 00:20:26.457 }, 00:20:26.457 "auth": { 00:20:26.457 "state": "completed", 00:20:26.457 "digest": "sha384", 00:20:26.457 "dhgroup": "ffdhe6144" 00:20:26.457 } 00:20:26.457 } 00:20:26.457 ]' 00:20:26.457 17:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.457 17:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.457 17:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.457 17:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:26.457 17:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.457 17:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.457 17:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.457 17:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.713 17:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:20:27.645 17:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.645 17:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.645 17:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.645 17:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.645 17:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.645 17:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.646 17:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:27.646 17:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:27.903 17:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:27.903 17:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.903 17:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.903 17:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:27.903 17:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:27.903 17:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.903 17:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.903 17:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.903 17:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.903 17:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.903 17:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.903 17:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.487 00:20:28.487 17:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.487 17:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.487 17:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.745 17:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.745 17:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.745 17:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.745 17:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.745 17:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.745 17:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.745 { 00:20:28.745 "cntlid": 83, 00:20:28.745 "qid": 0, 00:20:28.745 "state": "enabled", 00:20:28.745 "listen_address": { 00:20:28.745 "trtype": "TCP", 00:20:28.745 "adrfam": "IPv4", 00:20:28.745 "traddr": "10.0.0.2", 00:20:28.745 "trsvcid": "4420" 00:20:28.745 }, 00:20:28.745 "peer_address": { 00:20:28.745 "trtype": "TCP", 00:20:28.745 "adrfam": "IPv4", 00:20:28.745 "traddr": "10.0.0.1", 00:20:28.745 "trsvcid": "56878" 00:20:28.745 }, 00:20:28.745 "auth": { 00:20:28.745 "state": "completed", 00:20:28.745 "digest": "sha384", 00:20:28.745 
"dhgroup": "ffdhe6144" 00:20:28.745 } 00:20:28.745 } 00:20:28.745 ]' 00:20:28.745 17:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.745 17:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.745 17:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.002 17:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.003 17:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.003 17:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.003 17:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.003 17:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.259 17:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 00:20:30.192 17:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.192 17:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.192 17:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.192 17:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.192 17:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.192 17:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.192 17:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:30.192 17:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:30.449 17:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:30.449 17:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.449 17:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:30.449 17:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:30.449 17:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:30.449 17:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.449 17:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.449 17:56:05 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.449 17:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.449 17:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.449 17:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.449 17:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.012 00:20:31.012 17:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.012 17:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.012 17:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.269 17:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.269 17:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.269 17:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.269 17:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.269 17:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.269 17:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.269 { 00:20:31.269 "cntlid": 85, 00:20:31.269 "qid": 0, 00:20:31.269 "state": "enabled", 00:20:31.269 "listen_address": { 00:20:31.269 "trtype": "TCP", 00:20:31.269 "adrfam": "IPv4", 00:20:31.269 "traddr": "10.0.0.2", 00:20:31.269 "trsvcid": "4420" 00:20:31.269 }, 00:20:31.269 "peer_address": { 00:20:31.269 "trtype": "TCP", 00:20:31.269 "adrfam": "IPv4", 00:20:31.269 "traddr": "10.0.0.1", 00:20:31.269 "trsvcid": "36640" 00:20:31.269 }, 00:20:31.269 "auth": { 00:20:31.269 "state": "completed", 00:20:31.269 "digest": "sha384", 00:20:31.269 "dhgroup": "ffdhe6144" 00:20:31.269 } 00:20:31.269 } 00:20:31.269 ]' 00:20:31.269 17:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.269 17:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.269 17:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.269 17:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:31.269 17:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.526 17:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.526 17:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.526 17:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.784 17:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:20:32.721 17:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.721 17:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.721 17:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.721 17:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.721 17:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.721 17:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.721 17:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:32.721 17:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:32.721 17:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:32.721 17:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.721 17:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.721 17:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:32.721 17:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:32.721 17:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.979 17:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:32.979 17:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.979 17:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.979 17:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.979 17:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:32.980 17:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.546 00:20:33.546 17:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.546 17:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.546 17:56:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.546 17:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.546 17:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.546 17:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.546 17:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.804 17:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.804 17:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.804 { 00:20:33.804 "cntlid": 87, 00:20:33.804 "qid": 0, 00:20:33.804 "state": "enabled", 00:20:33.804 "listen_address": { 00:20:33.804 "trtype": "TCP", 00:20:33.804 "adrfam": "IPv4", 00:20:33.804 "traddr": "10.0.0.2", 00:20:33.804 "trsvcid": "4420" 00:20:33.804 }, 00:20:33.804 "peer_address": { 00:20:33.804 "trtype": "TCP", 00:20:33.804 "adrfam": "IPv4", 00:20:33.804 "traddr": "10.0.0.1", 00:20:33.804 "trsvcid": "36662" 00:20:33.804 }, 00:20:33.804 "auth": { 00:20:33.804 "state": "completed", 00:20:33.804 "digest": "sha384", 00:20:33.804 "dhgroup": "ffdhe6144" 00:20:33.804 } 00:20:33.804 } 00:20:33.804 ]' 00:20:33.804 17:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.804 17:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.804 17:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.804 17:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:33.804 17:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.804 17:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.804 17:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.804 17:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.062 17:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:20:34.996 17:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.996 17:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.996 17:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.996 17:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.996 17:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.996 17:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.996 17:56:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.996 17:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:34.996 17:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:35.254 17:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:35.254 17:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.254 17:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:35.254 17:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:35.254 17:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:35.254 17:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.254 17:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.254 17:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.254 17:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.254 17:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.254 17:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.254 17:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.189 00:20:36.189 17:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.189 17:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.189 17:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.446 17:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.446 17:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.446 17:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.446 17:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.446 17:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.446 17:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.446 { 00:20:36.446 "cntlid": 89, 00:20:36.446 "qid": 0, 00:20:36.446 "state": "enabled", 00:20:36.446 "listen_address": { 00:20:36.446 "trtype": "TCP", 00:20:36.446 "adrfam": "IPv4", 00:20:36.446 "traddr": "10.0.0.2", 00:20:36.446 
"trsvcid": "4420" 00:20:36.446 }, 00:20:36.446 "peer_address": { 00:20:36.446 "trtype": "TCP", 00:20:36.446 "adrfam": "IPv4", 00:20:36.446 "traddr": "10.0.0.1", 00:20:36.446 "trsvcid": "36704" 00:20:36.446 }, 00:20:36.446 "auth": { 00:20:36.446 "state": "completed", 00:20:36.446 "digest": "sha384", 00:20:36.447 "dhgroup": "ffdhe8192" 00:20:36.447 } 00:20:36.447 } 00:20:36.447 ]' 00:20:36.447 17:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.447 17:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.447 17:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.447 17:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:36.447 17:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.447 17:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.447 17:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.447 17:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.704 17:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:20:37.635 17:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.635 17:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.635 17:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.635 17:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.635 17:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.635 17:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.635 17:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:37.635 17:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:37.893 17:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:37.893 17:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.893 17:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:37.893 17:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:37.893 17:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:37.893 17:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.893 17:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.893 17:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.893 17:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.893 17:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.893 17:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.893 17:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.826 00:20:38.826 17:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.826 17:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.826 17:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.083 17:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.083 17:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.083 17:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.083 17:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.083 17:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.083 17:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.083 { 00:20:39.083 "cntlid": 91, 00:20:39.083 "qid": 0, 00:20:39.084 "state": "enabled", 00:20:39.084 "listen_address": { 00:20:39.084 "trtype": "TCP", 00:20:39.084 "adrfam": "IPv4", 00:20:39.084 "traddr": "10.0.0.2", 00:20:39.084 "trsvcid": "4420" 00:20:39.084 }, 00:20:39.084 "peer_address": { 00:20:39.084 "trtype": "TCP", 00:20:39.084 "adrfam": "IPv4", 00:20:39.084 "traddr": "10.0.0.1", 00:20:39.084 "trsvcid": "36734" 00:20:39.084 }, 00:20:39.084 "auth": { 00:20:39.084 "state": "completed", 00:20:39.084 "digest": "sha384", 00:20:39.084 "dhgroup": "ffdhe8192" 00:20:39.084 } 00:20:39.084 } 00:20:39.084 ]' 00:20:39.084 17:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.084 17:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.084 17:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.084 17:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.084 17:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.084 17:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.084 17:56:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.084 17:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.341 17:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.714 17:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.647 00:20:41.647 17:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.647 17:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.647 17:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.904 17:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.904 17:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.904 17:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.904 17:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.904 17:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.905 17:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.905 { 00:20:41.905 "cntlid": 93, 00:20:41.905 "qid": 0, 00:20:41.905 "state": "enabled", 00:20:41.905 "listen_address": { 00:20:41.905 "trtype": "TCP", 00:20:41.905 "adrfam": "IPv4", 00:20:41.905 "traddr": "10.0.0.2", 00:20:41.905 "trsvcid": "4420" 00:20:41.905 }, 00:20:41.905 "peer_address": { 00:20:41.905 "trtype": "TCP", 00:20:41.905 "adrfam": "IPv4", 00:20:41.905 "traddr": "10.0.0.1", 00:20:41.905 "trsvcid": "45140" 00:20:41.905 }, 00:20:41.905 "auth": { 00:20:41.905 "state": "completed", 00:20:41.905 "digest": "sha384", 00:20:41.905 "dhgroup": "ffdhe8192" 00:20:41.905 } 00:20:41.905 } 00:20:41.905 ]' 00:20:41.905 17:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.905 17:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.905 17:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.905 17:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:41.905 17:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.905 17:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.905 17:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.905 17:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.163 17:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:20:43.097 17:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.097 17:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.097 17:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.097 17:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.097 17:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.097 17:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.097 17:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.097 17:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.355 17:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:43.355 17:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.355 17:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:43.355 17:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:43.355 17:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:43.355 17:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.355 17:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:43.355 17:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.355 17:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.355 17:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.355 17:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.355 17:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.332 00:20:44.332 17:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.332 17:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.332 17:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.589 17:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.589 17:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.589 17:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.589 17:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.589 17:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.589 17:56:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.589 { 00:20:44.589 "cntlid": 95, 00:20:44.589 "qid": 0, 00:20:44.589 "state": "enabled", 00:20:44.589 "listen_address": { 00:20:44.589 "trtype": "TCP", 00:20:44.589 "adrfam": "IPv4", 00:20:44.589 "traddr": "10.0.0.2", 00:20:44.589 "trsvcid": "4420" 00:20:44.589 }, 00:20:44.589 "peer_address": { 00:20:44.589 "trtype": "TCP", 00:20:44.589 "adrfam": "IPv4", 00:20:44.589 "traddr": "10.0.0.1", 00:20:44.589 "trsvcid": "45170" 00:20:44.590 }, 00:20:44.590 "auth": { 00:20:44.590 "state": "completed", 00:20:44.590 "digest": "sha384", 00:20:44.590 "dhgroup": "ffdhe8192" 00:20:44.590 } 00:20:44.590 } 00:20:44.590 ]' 00:20:44.590 17:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.590 17:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.590 17:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.590 17:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.590 17:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.847 17:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.847 17:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.847 17:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.105 17:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:20:46.038 17:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.038 17:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.038 17:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.038 17:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.038 17:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.038 17:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:46.038 17:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.038 17:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.038 17:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:46.038 17:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:46.296 17:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:46.296 17:56:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.296 17:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:46.296 17:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:46.296 17:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:46.296 17:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.296 17:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.296 17:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.296 17:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.296 17:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.296 17:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.296 17:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.554 00:20:46.554 17:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.554 17:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.554 17:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.811 17:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.811 17:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.811 17:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.811 17:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.811 17:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.811 17:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.811 { 00:20:46.811 "cntlid": 97, 00:20:46.811 "qid": 0, 00:20:46.811 "state": "enabled", 00:20:46.811 "listen_address": { 00:20:46.811 "trtype": "TCP", 00:20:46.811 "adrfam": "IPv4", 00:20:46.811 "traddr": "10.0.0.2", 00:20:46.811 "trsvcid": "4420" 00:20:46.811 }, 00:20:46.811 "peer_address": { 00:20:46.811 "trtype": "TCP", 00:20:46.811 "adrfam": "IPv4", 00:20:46.811 "traddr": "10.0.0.1", 00:20:46.811 "trsvcid": "45190" 00:20:46.811 }, 00:20:46.811 "auth": { 00:20:46.811 "state": "completed", 00:20:46.811 "digest": "sha512", 00:20:46.811 "dhgroup": "null" 00:20:46.811 } 00:20:46.811 } 00:20:46.811 ]' 00:20:46.811 17:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.811 17:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.811 17:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:20:46.811 17:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:46.811 17:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.811 17:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.811 17:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.811 17:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.069 17:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:20:48.003 17:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.003 17:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.003 17:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.003 17:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.003 17:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.003 17:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.003 17:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:48.003 17:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:48.261 17:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:48.261 17:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.261 17:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:48.261 17:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:48.261 17:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:48.261 17:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.261 17:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.261 17:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.261 17:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.261 17:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.261 17:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.261 17:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.518 00:20:48.518 17:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.518 17:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.518 17:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.775 17:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.775 17:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.775 17:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.775 17:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.775 17:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.775 17:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.775 { 00:20:48.775 "cntlid": 99, 00:20:48.775 "qid": 0, 00:20:48.775 "state": "enabled", 00:20:48.775 "listen_address": { 00:20:48.775 "trtype": "TCP", 00:20:48.775 "adrfam": "IPv4", 00:20:48.775 "traddr": "10.0.0.2", 00:20:48.775 "trsvcid": "4420" 00:20:48.775 }, 00:20:48.775 "peer_address": { 00:20:48.775 "trtype": "TCP", 00:20:48.775 "adrfam": "IPv4", 00:20:48.775 "traddr": "10.0.0.1", 00:20:48.775 "trsvcid": "45210" 00:20:48.775 }, 00:20:48.775 "auth": { 00:20:48.775 "state": "completed", 00:20:48.775 "digest": "sha512", 00:20:48.775 "dhgroup": "null" 00:20:48.775 } 00:20:48.775 } 00:20:48.775 ]' 00:20:48.775 17:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.032 17:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.032 17:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.032 17:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:49.032 17:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.032 17:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.032 17:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.032 17:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.290 17:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 
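[editor's note] The cycle the trace keeps repeating, once per digest/dhgroup/key combination, is: restrict the host to one DH-HMAC-CHAP digest and dhgroup, register the host NQN on the subsystem with the key under test, attach a controller through the host RPC socket (the in-band authentication runs during this connect), read the qpair back from the target and check that auth.state/auth.digest/auth.dhgroup match, then exercise the same key pair through the kernel initiator with nvme connect before tearing down. A minimal sketch of one iteration, condensed from the RPCs and flags visible in the trace above (socket paths, NQNs, addresses and key names are the ones from this run, not canonical values; the DHHC-1 secret strings are elided into variables; this is an illustration, not the literal target/auth.sh body):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Host side (shown as "hostrpc" above): advertise only the digest/dhgroup under test
    $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    # Target side (default RPC socket, shown as "rpc_cmd" above): allow the host with key2/ckey2
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach a controller; DH-HMAC-CHAP is negotiated during this connect
    $rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $hostnqn -n $subnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2
    [[ $($rpc -s $hostsock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target side: the authenticated qpair should report the expected parameters
    qpairs=$($rpc nvmf_subsystem_get_qpairs $subnqn)
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    $rpc -s $hostsock bdev_nvme_detach_controller nvme0

    # Same key pair via the kernel initiator; $key2 / $ckey2 hold the DHHC-1:... strings from the trace
    nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret "$key2" --dhchap-ctrl-secret "$ckey2"
    nvme disconnect -n $subnqn
    $rpc nvmf_subsystem_remove_host $subnqn $hostnqn

Running both the SPDK host stack (bdev_nvme_attach_controller) and the Linux kernel initiator (nvme connect) in each iteration means both DH-HMAC-CHAP implementations are exercised against the target for every digest/dhgroup/key combination.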
00:20:50.221 17:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.221 17:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.221 17:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.221 17:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.221 17:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.221 17:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.221 17:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.221 17:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.478 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:50.478 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.478 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:50.478 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:50.478 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:50.478 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.478 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.478 17:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.478 17:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.478 17:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.478 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.478 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.735 00:20:50.735 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.735 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.735 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.992 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.992 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.992 17:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.992 17:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.992 17:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.992 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.992 { 00:20:50.992 "cntlid": 101, 00:20:50.992 "qid": 0, 00:20:50.992 "state": "enabled", 00:20:50.992 "listen_address": { 00:20:50.992 "trtype": "TCP", 00:20:50.992 "adrfam": "IPv4", 00:20:50.992 "traddr": "10.0.0.2", 00:20:50.992 "trsvcid": "4420" 00:20:50.992 }, 00:20:50.992 "peer_address": { 00:20:50.992 "trtype": "TCP", 00:20:50.992 "adrfam": "IPv4", 00:20:50.992 "traddr": "10.0.0.1", 00:20:50.992 "trsvcid": "50618" 00:20:50.992 }, 00:20:50.992 "auth": { 00:20:50.992 "state": "completed", 00:20:50.992 "digest": "sha512", 00:20:50.992 "dhgroup": "null" 00:20:50.992 } 00:20:50.992 } 00:20:50.992 ]' 00:20:50.992 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.992 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.992 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.992 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:50.992 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.992 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.992 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.992 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.250 17:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:20:52.182 17:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.182 17:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.182 17:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.182 17:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.182 17:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.182 17:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.182 17:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:52.182 17:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:20:52.439 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:52.439 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.439 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:52.439 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:52.439 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:52.439 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.439 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:52.439 17:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.439 17:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.439 17:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.439 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.439 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.003 00:20:53.003 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.003 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.003 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.261 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.261 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.261 17:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.261 17:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.261 17:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.261 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.261 { 00:20:53.261 "cntlid": 103, 00:20:53.261 "qid": 0, 00:20:53.261 "state": "enabled", 00:20:53.261 "listen_address": { 00:20:53.261 "trtype": "TCP", 00:20:53.261 "adrfam": "IPv4", 00:20:53.261 "traddr": "10.0.0.2", 00:20:53.261 "trsvcid": "4420" 00:20:53.261 }, 00:20:53.261 "peer_address": { 00:20:53.261 "trtype": "TCP", 00:20:53.261 "adrfam": "IPv4", 00:20:53.261 "traddr": "10.0.0.1", 00:20:53.261 "trsvcid": "50650" 00:20:53.261 }, 00:20:53.261 "auth": { 00:20:53.261 "state": "completed", 00:20:53.261 "digest": "sha512", 00:20:53.261 "dhgroup": "null" 00:20:53.261 } 00:20:53.261 } 00:20:53.261 ]' 00:20:53.261 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.261 17:56:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.261 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.261 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:53.261 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.261 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.261 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.261 17:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.519 17:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:20:54.450 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.450 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.450 17:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.450 17:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.450 17:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.450 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.450 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.450 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:54.451 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:54.709 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:54.709 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.709 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:54.709 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:54.709 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:54.709 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.709 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.709 17:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.709 17:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.709 17:56:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.709 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.709 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.967 00:20:54.967 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.967 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.967 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.250 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.250 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.250 17:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.251 17:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.251 17:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.251 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.251 { 00:20:55.251 "cntlid": 105, 00:20:55.251 "qid": 0, 00:20:55.251 "state": "enabled", 00:20:55.251 "listen_address": { 00:20:55.251 "trtype": "TCP", 00:20:55.251 "adrfam": "IPv4", 00:20:55.251 "traddr": "10.0.0.2", 00:20:55.251 "trsvcid": "4420" 00:20:55.251 }, 00:20:55.251 "peer_address": { 00:20:55.251 "trtype": "TCP", 00:20:55.251 "adrfam": "IPv4", 00:20:55.251 "traddr": "10.0.0.1", 00:20:55.251 "trsvcid": "50664" 00:20:55.251 }, 00:20:55.251 "auth": { 00:20:55.251 "state": "completed", 00:20:55.251 "digest": "sha512", 00:20:55.251 "dhgroup": "ffdhe2048" 00:20:55.251 } 00:20:55.251 } 00:20:55.251 ]' 00:20:55.251 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.251 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.251 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.251 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:55.251 17:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.251 17:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.251 17:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.251 17:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.508 17:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:20:56.443 17:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.443 17:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.443 17:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.443 17:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.443 17:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.443 17:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.443 17:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:56.443 17:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:56.700 17:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:56.700 17:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.700 17:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:56.700 17:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:56.700 17:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:56.700 17:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.700 17:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.700 17:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.700 17:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.700 17:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.700 17:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.700 17:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.264 00:20:57.264 17:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.264 17:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.264 17:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.265 17:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.265 17:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.265 17:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.265 17:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.265 17:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.265 17:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.265 { 00:20:57.265 "cntlid": 107, 00:20:57.265 "qid": 0, 00:20:57.265 "state": "enabled", 00:20:57.265 "listen_address": { 00:20:57.265 "trtype": "TCP", 00:20:57.265 "adrfam": "IPv4", 00:20:57.265 "traddr": "10.0.0.2", 00:20:57.265 "trsvcid": "4420" 00:20:57.265 }, 00:20:57.265 "peer_address": { 00:20:57.265 "trtype": "TCP", 00:20:57.265 "adrfam": "IPv4", 00:20:57.265 "traddr": "10.0.0.1", 00:20:57.265 "trsvcid": "50688" 00:20:57.265 }, 00:20:57.265 "auth": { 00:20:57.265 "state": "completed", 00:20:57.265 "digest": "sha512", 00:20:57.265 "dhgroup": "ffdhe2048" 00:20:57.265 } 00:20:57.265 } 00:20:57.265 ]' 00:20:57.265 17:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.522 17:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.522 17:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.522 17:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:57.522 17:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.522 17:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.522 17:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.522 17:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.779 17:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 00:20:58.710 17:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.710 17:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.710 17:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.710 17:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.710 17:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.710 17:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.710 17:56:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:58.710 17:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:58.968 17:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:58.968 17:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.968 17:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:58.968 17:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:58.968 17:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:58.968 17:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.968 17:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.968 17:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.968 17:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.968 17:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.968 17:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.968 17:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.570 00:20:59.570 17:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.570 17:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.570 17:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.570 17:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.570 17:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.570 17:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.570 17:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.570 17:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.570 17:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.570 { 00:20:59.570 "cntlid": 109, 00:20:59.570 "qid": 0, 00:20:59.570 "state": "enabled", 00:20:59.570 "listen_address": { 00:20:59.570 "trtype": "TCP", 00:20:59.570 "adrfam": "IPv4", 00:20:59.570 "traddr": "10.0.0.2", 00:20:59.570 "trsvcid": "4420" 00:20:59.570 }, 00:20:59.570 "peer_address": { 00:20:59.570 "trtype": "TCP", 00:20:59.570 
"adrfam": "IPv4", 00:20:59.570 "traddr": "10.0.0.1", 00:20:59.570 "trsvcid": "54668" 00:20:59.570 }, 00:20:59.570 "auth": { 00:20:59.570 "state": "completed", 00:20:59.570 "digest": "sha512", 00:20:59.570 "dhgroup": "ffdhe2048" 00:20:59.570 } 00:20:59.570 } 00:20:59.570 ]' 00:20:59.570 17:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.570 17:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.570 17:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.827 17:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:59.827 17:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.827 17:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.827 17:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.827 17:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.084 17:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:21:01.016 17:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.016 17:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.016 17:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.016 17:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.016 17:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.016 17:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.016 17:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.016 17:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.275 17:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:01.275 17:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.275 17:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:01.275 17:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:01.275 17:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:01.275 17:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.275 17:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:01.275 17:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.275 17:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.275 17:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.275 17:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.275 17:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.532 00:21:01.532 17:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.532 17:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.532 17:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.789 17:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.789 17:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.789 17:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.789 17:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.789 17:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.789 17:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.789 { 00:21:01.789 "cntlid": 111, 00:21:01.789 "qid": 0, 00:21:01.789 "state": "enabled", 00:21:01.789 "listen_address": { 00:21:01.789 "trtype": "TCP", 00:21:01.789 "adrfam": "IPv4", 00:21:01.789 "traddr": "10.0.0.2", 00:21:01.789 "trsvcid": "4420" 00:21:01.789 }, 00:21:01.789 "peer_address": { 00:21:01.789 "trtype": "TCP", 00:21:01.789 "adrfam": "IPv4", 00:21:01.789 "traddr": "10.0.0.1", 00:21:01.789 "trsvcid": "54694" 00:21:01.789 }, 00:21:01.789 "auth": { 00:21:01.789 "state": "completed", 00:21:01.789 "digest": "sha512", 00:21:01.789 "dhgroup": "ffdhe2048" 00:21:01.789 } 00:21:01.789 } 00:21:01.789 ]' 00:21:01.789 17:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.046 17:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.046 17:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.046 17:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.046 17:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.046 17:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.046 17:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.046 17:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.302 17:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:21:03.233 17:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.233 17:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.233 17:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.233 17:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.233 17:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.233 17:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.233 17:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.233 17:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:03.233 17:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:03.491 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:03.491 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.491 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:03.491 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:03.491 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:03.491 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.491 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.491 17:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.491 17:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.491 17:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.491 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.491 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:21:03.748 00:21:03.748 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.748 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.748 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.005 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.005 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.005 17:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.005 17:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.005 17:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.005 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.005 { 00:21:04.005 "cntlid": 113, 00:21:04.005 "qid": 0, 00:21:04.005 "state": "enabled", 00:21:04.005 "listen_address": { 00:21:04.005 "trtype": "TCP", 00:21:04.005 "adrfam": "IPv4", 00:21:04.005 "traddr": "10.0.0.2", 00:21:04.005 "trsvcid": "4420" 00:21:04.005 }, 00:21:04.005 "peer_address": { 00:21:04.005 "trtype": "TCP", 00:21:04.005 "adrfam": "IPv4", 00:21:04.005 "traddr": "10.0.0.1", 00:21:04.005 "trsvcid": "54720" 00:21:04.005 }, 00:21:04.005 "auth": { 00:21:04.005 "state": "completed", 00:21:04.005 "digest": "sha512", 00:21:04.005 "dhgroup": "ffdhe3072" 00:21:04.005 } 00:21:04.005 } 00:21:04.005 ]' 00:21:04.005 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.262 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.262 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.262 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:04.262 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.262 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.262 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.262 17:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.520 17:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:21:05.452 17:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.452 17:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.452 17:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:21:05.452 17:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.452 17:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.452 17:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.452 17:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:05.452 17:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:05.710 17:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:05.710 17:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.710 17:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:05.710 17:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:05.710 17:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:05.710 17:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.710 17:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.710 17:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.710 17:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.710 17:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.710 17:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.710 17:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.275 00:21:06.275 17:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.275 17:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.275 17:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.275 17:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.275 17:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.275 17:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.275 17:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.275 17:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.275 17:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.275 { 00:21:06.275 
"cntlid": 115, 00:21:06.275 "qid": 0, 00:21:06.275 "state": "enabled", 00:21:06.275 "listen_address": { 00:21:06.275 "trtype": "TCP", 00:21:06.275 "adrfam": "IPv4", 00:21:06.275 "traddr": "10.0.0.2", 00:21:06.276 "trsvcid": "4420" 00:21:06.276 }, 00:21:06.276 "peer_address": { 00:21:06.276 "trtype": "TCP", 00:21:06.276 "adrfam": "IPv4", 00:21:06.276 "traddr": "10.0.0.1", 00:21:06.276 "trsvcid": "54744" 00:21:06.276 }, 00:21:06.276 "auth": { 00:21:06.276 "state": "completed", 00:21:06.276 "digest": "sha512", 00:21:06.276 "dhgroup": "ffdhe3072" 00:21:06.276 } 00:21:06.276 } 00:21:06.276 ]' 00:21:06.276 17:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.533 17:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.533 17:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.533 17:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:06.533 17:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.533 17:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.533 17:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.533 17:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.790 17:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 00:21:07.723 17:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.723 17:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.723 17:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.723 17:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.723 17:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.723 17:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.723 17:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:07.723 17:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:07.981 17:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:07.981 17:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.981 17:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:07.981 17:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:21:07.981 17:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:07.981 17:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.981 17:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.981 17:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.981 17:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.981 17:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.981 17:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.981 17:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.239 00:21:08.239 17:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.239 17:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.239 17:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.497 17:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.497 17:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.497 17:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.497 17:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.497 17:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.497 17:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.497 { 00:21:08.497 "cntlid": 117, 00:21:08.497 "qid": 0, 00:21:08.497 "state": "enabled", 00:21:08.497 "listen_address": { 00:21:08.497 "trtype": "TCP", 00:21:08.497 "adrfam": "IPv4", 00:21:08.497 "traddr": "10.0.0.2", 00:21:08.497 "trsvcid": "4420" 00:21:08.497 }, 00:21:08.497 "peer_address": { 00:21:08.497 "trtype": "TCP", 00:21:08.497 "adrfam": "IPv4", 00:21:08.497 "traddr": "10.0.0.1", 00:21:08.497 "trsvcid": "54766" 00:21:08.497 }, 00:21:08.497 "auth": { 00:21:08.497 "state": "completed", 00:21:08.497 "digest": "sha512", 00:21:08.497 "dhgroup": "ffdhe3072" 00:21:08.497 } 00:21:08.497 } 00:21:08.497 ]' 00:21:08.497 17:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.497 17:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.497 17:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.754 17:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:08.754 17:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:21:08.754 17:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.754 17:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.754 17:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.012 17:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:21:09.944 17:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.944 17:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.944 17:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.944 17:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.944 17:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.944 17:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.944 17:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:09.944 17:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.202 17:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:10.202 17:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.202 17:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:10.202 17:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:10.202 17:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:10.202 17:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.202 17:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:10.202 17:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.202 17:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.202 17:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.202 17:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.202 17:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.459 00:21:10.459 17:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.459 17:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.459 17:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.717 17:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.717 17:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.717 17:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.717 17:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.717 17:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.717 17:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.717 { 00:21:10.717 "cntlid": 119, 00:21:10.717 "qid": 0, 00:21:10.717 "state": "enabled", 00:21:10.717 "listen_address": { 00:21:10.717 "trtype": "TCP", 00:21:10.717 "adrfam": "IPv4", 00:21:10.717 "traddr": "10.0.0.2", 00:21:10.717 "trsvcid": "4420" 00:21:10.717 }, 00:21:10.717 "peer_address": { 00:21:10.717 "trtype": "TCP", 00:21:10.717 "adrfam": "IPv4", 00:21:10.717 "traddr": "10.0.0.1", 00:21:10.717 "trsvcid": "37170" 00:21:10.717 }, 00:21:10.717 "auth": { 00:21:10.717 "state": "completed", 00:21:10.717 "digest": "sha512", 00:21:10.717 "dhgroup": "ffdhe3072" 00:21:10.717 } 00:21:10.717 } 00:21:10.717 ]' 00:21:10.717 17:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.717 17:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.717 17:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.717 17:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:10.717 17:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.974 17:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.974 17:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.974 17:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.231 17:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:21:12.162 17:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.162 17:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.162 17:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.162 17:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.162 17:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.162 17:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.162 17:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.162 17:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:12.162 17:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:12.420 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:12.420 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.420 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:12.420 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:12.420 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:12.420 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.420 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.420 17:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.420 17:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.420 17:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.420 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.420 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.676 00:21:12.933 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.933 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.933 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.933 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.933 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.933 17:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.933 17:56:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.192 17:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.192 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.192 { 00:21:13.192 "cntlid": 121, 00:21:13.192 "qid": 0, 00:21:13.192 "state": "enabled", 00:21:13.192 "listen_address": { 00:21:13.192 "trtype": "TCP", 00:21:13.192 "adrfam": "IPv4", 00:21:13.192 "traddr": "10.0.0.2", 00:21:13.192 "trsvcid": "4420" 00:21:13.192 }, 00:21:13.192 "peer_address": { 00:21:13.192 "trtype": "TCP", 00:21:13.192 "adrfam": "IPv4", 00:21:13.192 "traddr": "10.0.0.1", 00:21:13.192 "trsvcid": "37194" 00:21:13.192 }, 00:21:13.192 "auth": { 00:21:13.192 "state": "completed", 00:21:13.192 "digest": "sha512", 00:21:13.192 "dhgroup": "ffdhe4096" 00:21:13.192 } 00:21:13.192 } 00:21:13.192 ]' 00:21:13.192 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.192 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.192 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.192 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:13.192 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.192 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.192 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.192 17:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.450 17:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:21:14.409 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.409 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.409 17:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.409 17:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.409 17:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.409 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.409 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:14.409 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:14.666 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:21:14.666 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:14.666 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:14.666 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:14.666 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:14.666 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.666 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.666 17:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.666 17:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.666 17:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.666 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.666 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.923 00:21:14.923 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.923 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.923 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.181 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.181 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.181 17:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.181 17:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.181 17:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.181 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.181 { 00:21:15.181 "cntlid": 123, 00:21:15.181 "qid": 0, 00:21:15.181 "state": "enabled", 00:21:15.181 "listen_address": { 00:21:15.181 "trtype": "TCP", 00:21:15.181 "adrfam": "IPv4", 00:21:15.181 "traddr": "10.0.0.2", 00:21:15.181 "trsvcid": "4420" 00:21:15.181 }, 00:21:15.181 "peer_address": { 00:21:15.181 "trtype": "TCP", 00:21:15.181 "adrfam": "IPv4", 00:21:15.181 "traddr": "10.0.0.1", 00:21:15.181 "trsvcid": "37220" 00:21:15.181 }, 00:21:15.181 "auth": { 00:21:15.181 "state": "completed", 00:21:15.181 "digest": "sha512", 00:21:15.181 "dhgroup": "ffdhe4096" 00:21:15.181 } 00:21:15.181 } 00:21:15.181 ]' 00:21:15.181 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.439 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.439 17:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.439 17:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:15.439 17:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.439 17:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.439 17:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.439 17:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.697 17:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 00:21:16.630 17:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.630 17:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.630 17:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.630 17:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.630 17:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.630 17:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.630 17:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:16.630 17:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:16.888 17:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:16.888 17:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.888 17:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:16.888 17:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:16.888 17:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:16.888 17:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.888 17:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.888 17:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.888 17:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.888 17:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.888 
17:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.888 17:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.145 00:21:17.145 17:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.145 17:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.145 17:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.403 17:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.403 17:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.403 17:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.403 17:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.403 17:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.403 17:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.403 { 00:21:17.403 "cntlid": 125, 00:21:17.403 "qid": 0, 00:21:17.403 "state": "enabled", 00:21:17.403 "listen_address": { 00:21:17.403 "trtype": "TCP", 00:21:17.403 "adrfam": "IPv4", 00:21:17.403 "traddr": "10.0.0.2", 00:21:17.403 "trsvcid": "4420" 00:21:17.403 }, 00:21:17.403 "peer_address": { 00:21:17.403 "trtype": "TCP", 00:21:17.403 "adrfam": "IPv4", 00:21:17.403 "traddr": "10.0.0.1", 00:21:17.403 "trsvcid": "37244" 00:21:17.403 }, 00:21:17.403 "auth": { 00:21:17.403 "state": "completed", 00:21:17.403 "digest": "sha512", 00:21:17.403 "dhgroup": "ffdhe4096" 00:21:17.403 } 00:21:17.403 } 00:21:17.403 ]' 00:21:17.403 17:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.403 17:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.403 17:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.661 17:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:17.661 17:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.661 17:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.661 17:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.661 17:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.918 17:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:18.850 17:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:19.414 00:21:19.414 17:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.414 17:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.414 17:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.671 17:56:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.671 17:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.671 17:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.671 17:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.671 17:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.671 17:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.671 { 00:21:19.671 "cntlid": 127, 00:21:19.671 "qid": 0, 00:21:19.671 "state": "enabled", 00:21:19.671 "listen_address": { 00:21:19.671 "trtype": "TCP", 00:21:19.671 "adrfam": "IPv4", 00:21:19.671 "traddr": "10.0.0.2", 00:21:19.671 "trsvcid": "4420" 00:21:19.671 }, 00:21:19.671 "peer_address": { 00:21:19.671 "trtype": "TCP", 00:21:19.671 "adrfam": "IPv4", 00:21:19.671 "traddr": "10.0.0.1", 00:21:19.671 "trsvcid": "37288" 00:21:19.671 }, 00:21:19.671 "auth": { 00:21:19.671 "state": "completed", 00:21:19.671 "digest": "sha512", 00:21:19.671 "dhgroup": "ffdhe4096" 00:21:19.671 } 00:21:19.671 } 00:21:19.671 ]' 00:21:19.671 17:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.671 17:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.671 17:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.671 17:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:19.671 17:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.671 17:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.671 17:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.671 17:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.928 17:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:21:20.861 17:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.861 17:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.861 17:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.861 17:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.861 17:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.861 17:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:20.861 17:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.861 17:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
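For readability, the per-iteration sequence this trace keeps repeating (target/auth.sh@94, @39 and @40) reduces to the sketch below. The paths, socket, NQNs and key names key0..key3 / ckey0..ckey3 are the ones used by this run; the keys themselves are assumed to have been loaded earlier in the test, and hostrpc / rpc_cmd are assumed to be the test's wrappers that issue the same scripts/rpc.py calls against the host and target applications respectively.

# host side: restrict the initiator to one digest/dhgroup combination
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# target side: allow the host NQN to authenticate with the given key pair
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: attach a controller with in-band DH-HMAC-CHAP, then inspect the
# negotiated digest/dhgroup/state via nvmf_subsystem_get_qpairs on the target
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0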
00:21:20.861 17:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:21.118 17:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:21.118 17:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.118 17:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:21.118 17:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:21.118 17:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:21.118 17:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.118 17:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.118 17:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.118 17:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.118 17:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.118 17:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.118 17:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.681 00:21:21.681 17:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.681 17:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.681 17:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.957 17:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.957 17:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.957 17:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.957 17:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.957 17:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.957 17:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.957 { 00:21:21.957 "cntlid": 129, 00:21:21.957 "qid": 0, 00:21:21.957 "state": "enabled", 00:21:21.957 "listen_address": { 00:21:21.957 "trtype": "TCP", 00:21:21.957 "adrfam": "IPv4", 00:21:21.957 "traddr": "10.0.0.2", 00:21:21.957 "trsvcid": "4420" 00:21:21.957 }, 00:21:21.957 "peer_address": { 00:21:21.957 "trtype": "TCP", 00:21:21.957 "adrfam": "IPv4", 00:21:21.957 "traddr": "10.0.0.1", 00:21:21.957 "trsvcid": "44766" 00:21:21.957 }, 00:21:21.957 "auth": { 
00:21:21.957 "state": "completed", 00:21:21.957 "digest": "sha512", 00:21:21.957 "dhgroup": "ffdhe6144" 00:21:21.957 } 00:21:21.957 } 00:21:21.957 ]' 00:21:21.957 17:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.957 17:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.957 17:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.957 17:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:21.957 17:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.214 17:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.214 17:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.214 17:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.471 17:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:21:23.401 17:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.401 17:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.401 17:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.401 17:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.401 17:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.401 17:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.401 17:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:23.401 17:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:23.658 17:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:23.658 17:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.658 17:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.658 17:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:23.658 17:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:23.658 17:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.658 17:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.658 17:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.658 17:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.658 17:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.658 17:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.658 17:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.223 00:21:24.223 17:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.223 17:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.223 17:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.480 17:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.480 17:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.480 17:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.480 17:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.480 17:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.480 17:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.480 { 00:21:24.480 "cntlid": 131, 00:21:24.480 "qid": 0, 00:21:24.480 "state": "enabled", 00:21:24.480 "listen_address": { 00:21:24.480 "trtype": "TCP", 00:21:24.480 "adrfam": "IPv4", 00:21:24.480 "traddr": "10.0.0.2", 00:21:24.480 "trsvcid": "4420" 00:21:24.480 }, 00:21:24.481 "peer_address": { 00:21:24.481 "trtype": "TCP", 00:21:24.481 "adrfam": "IPv4", 00:21:24.481 "traddr": "10.0.0.1", 00:21:24.481 "trsvcid": "44810" 00:21:24.481 }, 00:21:24.481 "auth": { 00:21:24.481 "state": "completed", 00:21:24.481 "digest": "sha512", 00:21:24.481 "dhgroup": "ffdhe6144" 00:21:24.481 } 00:21:24.481 } 00:21:24.481 ]' 00:21:24.481 17:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.481 17:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.481 17:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.481 17:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.481 17:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.481 17:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.481 17:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.481 17:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.738 17:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 00:21:25.670 17:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.671 17:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.671 17:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.671 17:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.671 17:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.671 17:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.671 17:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:25.671 17:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:25.928 17:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:25.928 17:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.928 17:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:25.928 17:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:25.928 17:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:25.928 17:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.928 17:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.928 17:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.928 17:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.928 17:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.928 17:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.928 17:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:26.493 00:21:26.493 17:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.493 17:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.493 17:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.750 17:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.750 17:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.750 17:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.750 17:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.750 17:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.750 17:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.750 { 00:21:26.750 "cntlid": 133, 00:21:26.750 "qid": 0, 00:21:26.750 "state": "enabled", 00:21:26.750 "listen_address": { 00:21:26.750 "trtype": "TCP", 00:21:26.750 "adrfam": "IPv4", 00:21:26.750 "traddr": "10.0.0.2", 00:21:26.750 "trsvcid": "4420" 00:21:26.750 }, 00:21:26.750 "peer_address": { 00:21:26.750 "trtype": "TCP", 00:21:26.750 "adrfam": "IPv4", 00:21:26.750 "traddr": "10.0.0.1", 00:21:26.750 "trsvcid": "44826" 00:21:26.751 }, 00:21:26.751 "auth": { 00:21:26.751 "state": "completed", 00:21:26.751 "digest": "sha512", 00:21:26.751 "dhgroup": "ffdhe6144" 00:21:26.751 } 00:21:26.751 } 00:21:26.751 ]' 00:21:26.751 17:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.751 17:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.751 17:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.751 17:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:26.751 17:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.008 17:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.008 17:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.008 17:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.321 17:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:21:28.253 17:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.253 17:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.253 17:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.253 17:57:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.253 17:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.253 17:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.253 17:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.253 17:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.510 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:28.510 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.510 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.510 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:28.510 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:28.510 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.510 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:28.510 17:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.510 17:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.510 17:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.510 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:28.510 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:29.074 00:21:29.074 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.074 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.074 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.363 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.363 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.363 17:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.363 17:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.363 17:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.363 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.363 { 00:21:29.363 "cntlid": 135, 00:21:29.363 "qid": 0, 00:21:29.363 "state": "enabled", 00:21:29.363 "listen_address": { 
00:21:29.363 "trtype": "TCP", 00:21:29.363 "adrfam": "IPv4", 00:21:29.363 "traddr": "10.0.0.2", 00:21:29.363 "trsvcid": "4420" 00:21:29.363 }, 00:21:29.363 "peer_address": { 00:21:29.363 "trtype": "TCP", 00:21:29.363 "adrfam": "IPv4", 00:21:29.363 "traddr": "10.0.0.1", 00:21:29.363 "trsvcid": "44858" 00:21:29.363 }, 00:21:29.363 "auth": { 00:21:29.363 "state": "completed", 00:21:29.363 "digest": "sha512", 00:21:29.363 "dhgroup": "ffdhe6144" 00:21:29.363 } 00:21:29.363 } 00:21:29.363 ]' 00:21:29.363 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.363 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.363 17:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.363 17:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.363 17:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.363 17:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.363 17:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.363 17:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.621 17:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:21:30.551 17:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.551 17:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.551 17:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.551 17:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.551 17:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.551 17:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.551 17:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.551 17:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.551 17:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.808 17:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:30.808 17:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.808 17:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:30.808 17:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:30.808 17:57:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:21:30.808 17:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.808 17:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.808 17:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.808 17:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.808 17:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.808 17:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.808 17:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.740 00:21:31.740 17:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.740 17:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.740 17:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.998 17:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.998 17:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.998 17:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.998 17:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.998 17:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.998 17:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.998 { 00:21:31.998 "cntlid": 137, 00:21:31.998 "qid": 0, 00:21:31.998 "state": "enabled", 00:21:31.998 "listen_address": { 00:21:31.998 "trtype": "TCP", 00:21:31.998 "adrfam": "IPv4", 00:21:31.998 "traddr": "10.0.0.2", 00:21:31.998 "trsvcid": "4420" 00:21:31.998 }, 00:21:31.998 "peer_address": { 00:21:31.998 "trtype": "TCP", 00:21:31.998 "adrfam": "IPv4", 00:21:31.998 "traddr": "10.0.0.1", 00:21:31.998 "trsvcid": "43944" 00:21:31.998 }, 00:21:31.998 "auth": { 00:21:31.998 "state": "completed", 00:21:31.998 "digest": "sha512", 00:21:31.998 "dhgroup": "ffdhe8192" 00:21:31.998 } 00:21:31.998 } 00:21:31.998 ]' 00:21:31.998 17:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.998 17:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.998 17:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.998 17:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.998 17:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.998 17:57:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.998 17:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.998 17:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.255 17:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:21:33.208 17:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.208 17:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.208 17:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.208 17:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.465 17:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.465 17:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.465 17:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.465 17:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.722 17:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:33.722 17:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.722 17:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.722 17:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:33.722 17:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:33.722 17:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.722 17:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.722 17:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.722 17:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.722 17:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.722 17:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.722 17:57:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.655 00:21:34.655 17:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.655 17:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.655 17:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.655 17:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.655 17:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.655 17:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.655 17:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.655 17:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.655 17:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.655 { 00:21:34.655 "cntlid": 139, 00:21:34.655 "qid": 0, 00:21:34.655 "state": "enabled", 00:21:34.655 "listen_address": { 00:21:34.655 "trtype": "TCP", 00:21:34.655 "adrfam": "IPv4", 00:21:34.655 "traddr": "10.0.0.2", 00:21:34.655 "trsvcid": "4420" 00:21:34.655 }, 00:21:34.655 "peer_address": { 00:21:34.655 "trtype": "TCP", 00:21:34.655 "adrfam": "IPv4", 00:21:34.655 "traddr": "10.0.0.1", 00:21:34.655 "trsvcid": "43986" 00:21:34.655 }, 00:21:34.655 "auth": { 00:21:34.655 "state": "completed", 00:21:34.655 "digest": "sha512", 00:21:34.655 "dhgroup": "ffdhe8192" 00:21:34.655 } 00:21:34.655 } 00:21:34.655 ]' 00:21:34.655 17:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.912 17:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.912 17:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.912 17:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:34.912 17:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.912 17:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.912 17:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.912 17:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.170 17:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGU0OGFmZTk1ZDE1NjYwYTczYTUxYjJkMWVhZGRmNTdhk2fU: --dhchap-ctrl-secret DHHC-1:02:N2Y1NGFlNDUyNzgxNzQ1ZmI2NDUyNTk5NzhkYmQ1YzY3NGEwZmYyODk3YmYyMjczkq6oCg==: 00:21:36.101 17:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:21:36.101 17:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.101 17:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.101 17:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.101 17:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.101 17:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.101 17:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.101 17:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.360 17:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:36.360 17:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.360 17:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:36.360 17:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:36.360 17:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:36.360 17:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.360 17:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.360 17:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.360 17:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.360 17:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.360 17:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.360 17:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.293 00:21:37.293 17:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.293 17:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.293 17:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.551 17:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.551 17:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.551 17:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:37.551 17:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.551 17:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.551 17:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.551 { 00:21:37.551 "cntlid": 141, 00:21:37.551 "qid": 0, 00:21:37.551 "state": "enabled", 00:21:37.551 "listen_address": { 00:21:37.551 "trtype": "TCP", 00:21:37.551 "adrfam": "IPv4", 00:21:37.551 "traddr": "10.0.0.2", 00:21:37.551 "trsvcid": "4420" 00:21:37.551 }, 00:21:37.551 "peer_address": { 00:21:37.551 "trtype": "TCP", 00:21:37.551 "adrfam": "IPv4", 00:21:37.551 "traddr": "10.0.0.1", 00:21:37.551 "trsvcid": "44020" 00:21:37.551 }, 00:21:37.551 "auth": { 00:21:37.551 "state": "completed", 00:21:37.551 "digest": "sha512", 00:21:37.551 "dhgroup": "ffdhe8192" 00:21:37.551 } 00:21:37.551 } 00:21:37.551 ]' 00:21:37.551 17:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.551 17:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.551 17:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.551 17:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.551 17:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.551 17:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.551 17:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.551 17:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.810 17:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:Y2ZhZTM4MjU5ZGYxNzczZjk1N2ZlNmExZmZhZGJlOTUyMThmOTM0N2ZjYmY2MzA20sxFyQ==: --dhchap-ctrl-secret DHHC-1:01:YzlmNDk3NTdhYjc0ZTVkODk5M2I1M2VlNmI2OGYwNTnaS+xK: 00:21:38.743 17:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.743 17:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.743 17:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.743 17:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.743 17:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.743 17:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.743 17:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.743 17:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:39.001 17:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:21:39.001 17:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.001 17:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:39.001 17:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:39.001 17:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:39.001 17:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.001 17:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:39.001 17:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.001 17:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.001 17:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.001 17:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.001 17:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.933 00:21:39.933 17:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.933 17:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.933 17:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.191 17:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.191 17:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.191 17:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.191 17:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.191 17:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.191 17:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.191 { 00:21:40.191 "cntlid": 143, 00:21:40.191 "qid": 0, 00:21:40.191 "state": "enabled", 00:21:40.191 "listen_address": { 00:21:40.191 "trtype": "TCP", 00:21:40.191 "adrfam": "IPv4", 00:21:40.191 "traddr": "10.0.0.2", 00:21:40.191 "trsvcid": "4420" 00:21:40.191 }, 00:21:40.191 "peer_address": { 00:21:40.191 "trtype": "TCP", 00:21:40.191 "adrfam": "IPv4", 00:21:40.191 "traddr": "10.0.0.1", 00:21:40.191 "trsvcid": "44046" 00:21:40.191 }, 00:21:40.191 "auth": { 00:21:40.191 "state": "completed", 00:21:40.191 "digest": "sha512", 00:21:40.191 "dhgroup": "ffdhe8192" 00:21:40.191 } 00:21:40.191 } 00:21:40.191 ]' 00:21:40.191 17:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.191 17:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.191 17:57:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.191 17:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:40.191 17:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.191 17:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.191 17:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.191 17:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.449 17:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:21:41.382 17:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.382 17:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.382 17:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.382 17:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.382 17:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.382 17:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:41.382 17:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:41.382 17:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:41.382 17:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:41.382 17:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:41.382 17:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:41.639 17:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:41.639 17:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.639 17:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:41.639 17:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:41.639 17:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:41.639 17:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.640 17:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
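After each successful in-band authentication the trace also exercises the kernel initiator path (target/auth.sh@52 and @55): nvme-cli connects with the raw DHHC-1 secrets and is then disconnected before the host entry is removed again. Schematically, with the base64 secrets from the log abbreviated:

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0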
00:21:41.640 17:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.640 17:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.640 17:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.640 17:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.640 17:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.585 00:21:42.585 17:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.585 17:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.585 17:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.842 17:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.842 17:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.842 17:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.842 17:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.842 17:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.842 17:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.842 { 00:21:42.842 "cntlid": 145, 00:21:42.842 "qid": 0, 00:21:42.842 "state": "enabled", 00:21:42.842 "listen_address": { 00:21:42.842 "trtype": "TCP", 00:21:42.842 "adrfam": "IPv4", 00:21:42.842 "traddr": "10.0.0.2", 00:21:42.842 "trsvcid": "4420" 00:21:42.842 }, 00:21:42.842 "peer_address": { 00:21:42.842 "trtype": "TCP", 00:21:42.842 "adrfam": "IPv4", 00:21:42.842 "traddr": "10.0.0.1", 00:21:42.842 "trsvcid": "52184" 00:21:42.842 }, 00:21:42.842 "auth": { 00:21:42.842 "state": "completed", 00:21:42.842 "digest": "sha512", 00:21:42.842 "dhgroup": "ffdhe8192" 00:21:42.842 } 00:21:42.842 } 00:21:42.842 ]' 00:21:42.842 17:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.842 17:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.842 17:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.842 17:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:42.842 17:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.842 17:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.842 17:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.842 17:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.099 
17:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGM3ZTQ1ZWFiYTJmODk3NGQ5OGE1ZjBmN2JhMWQ3ZmIwMmI2OWJhYjdlMTlmNDEza7W9HA==: --dhchap-ctrl-secret DHHC-1:03:NjU2MDA5YzQwNzJjMDZhZmVkMTMxZDViNGExODI0ZWY5NDAyOTkyZGFkMTdjYzFjY2Q3ZDM0N2U3MTFiMDgzMRFULUk=: 00:21:44.031 17:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.031 17:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.031 17:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.031 17:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.031 17:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.031 17:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:44.031 17:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.031 17:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.288 17:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.288 17:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:44.288 17:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:44.288 17:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:44.288 17:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:44.288 17:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.288 17:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:44.288 17:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.288 17:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:44.288 17:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:44.900 request: 00:21:44.900 { 00:21:44.900 "name": "nvme0", 00:21:44.900 "trtype": "tcp", 00:21:44.900 "traddr": 
"10.0.0.2", 00:21:44.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:44.900 "adrfam": "ipv4", 00:21:44.900 "trsvcid": "4420", 00:21:44.900 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:44.900 "dhchap_key": "key2", 00:21:44.900 "method": "bdev_nvme_attach_controller", 00:21:44.900 "req_id": 1 00:21:44.900 } 00:21:44.900 Got JSON-RPC error response 00:21:44.900 response: 00:21:44.900 { 00:21:44.900 "code": -5, 00:21:44.900 "message": "Input/output error" 00:21:44.900 } 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:44.900 17:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:45.831 request: 00:21:45.831 { 00:21:45.831 "name": "nvme0", 00:21:45.831 "trtype": "tcp", 00:21:45.831 "traddr": "10.0.0.2", 00:21:45.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:45.831 "adrfam": "ipv4", 00:21:45.831 "trsvcid": "4420", 00:21:45.831 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:45.831 "dhchap_key": "key1", 00:21:45.831 "dhchap_ctrlr_key": "ckey2", 00:21:45.831 "method": "bdev_nvme_attach_controller", 00:21:45.831 "req_id": 1 00:21:45.831 } 00:21:45.831 Got JSON-RPC error response 00:21:45.831 response: 00:21:45.831 { 00:21:45.831 "code": -5, 00:21:45.831 "message": "Input/output error" 00:21:45.831 } 00:21:45.831 17:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:45.831 17:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:45.831 17:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:45.831 17:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:45.831 17:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.831 17:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.831 17:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.831 17:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.832 17:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:45.832 17:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.832 17:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.832 17:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.832 17:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.832 17:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:45.832 17:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.832 17:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:45.832 17:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:45.832 17:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:45.832 17:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:45.832 17:57:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.832 17:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.766 request: 00:21:46.766 { 00:21:46.766 "name": "nvme0", 00:21:46.766 "trtype": "tcp", 00:21:46.766 "traddr": "10.0.0.2", 00:21:46.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:46.766 "adrfam": "ipv4", 00:21:46.766 "trsvcid": "4420", 00:21:46.766 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:46.766 "dhchap_key": "key1", 00:21:46.766 "dhchap_ctrlr_key": "ckey1", 00:21:46.766 "method": "bdev_nvme_attach_controller", 00:21:46.766 "req_id": 1 00:21:46.766 } 00:21:46.766 Got JSON-RPC error response 00:21:46.766 response: 00:21:46.766 { 00:21:46.766 "code": -5, 00:21:46.766 "message": "Input/output error" 00:21:46.766 } 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 955383 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 955383 ']' 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 955383 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 955383 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 955383' 00:21:46.766 killing process with pid 955383 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 955383 00:21:46.766 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 955383 00:21:47.025 17:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:47.025 17:57:21 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:47.025 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:47.025 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.025 17:57:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:47.025 17:57:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=977820 00:21:47.025 17:57:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 977820 00:21:47.025 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 977820 ']' 00:21:47.025 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.025 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:47.025 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.025 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:47.025 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.283 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:47.283 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:21:47.283 17:57:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:47.283 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:47.283 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.283 17:57:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.283 17:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:47.283 17:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 977820 00:21:47.283 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 977820 ']' 00:21:47.283 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.283 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:47.283 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
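At this point the log has restarted nvmf_tgt (nvmfpid=977820) with --wait-for-rpc, and waitforlisten is polling until the target answers on the UNIX domain socket /var/tmp/spdk.sock. The following is a minimal readiness-check sketch of that idea, not the harness's actual implementation; it assumes SPDK's standard JSON-RPC 2.0 framing and the rpc_get_methods method, neither of which appears verbatim in this log.

import json
import socket
import time

SOCK_PATH = "/var/tmp/spdk.sock"   # RPC socket the log above is waiting on

def rpc_ready(timeout=30.0):
    """Poll the SPDK RPC socket until it answers a JSON-RPC request (sketch)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(SOCK_PATH)
                # rpc_get_methods is assumed here as a cheap query; any
                # method the target implements would serve the same purpose.
                req = {"jsonrpc": "2.0", "method": "rpc_get_methods", "id": 1}
                s.sendall(json.dumps(req).encode())
                resp = json.loads(s.recv(65536).decode())
                return "result" in resp or "error" in resp
        except (FileNotFoundError, ConnectionRefusedError, json.JSONDecodeError):
            time.sleep(0.5)   # target not up yet, or partial response; retry
    return False

if __name__ == "__main__":
    print("rpc socket ready:", rpc_ready())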
00:21:47.283 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:47.283 17:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:47.542 17:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.475 00:21:48.475 17:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.475 17:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.475 17:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.733 17:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.733 17:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.733 17:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.733 17:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.733 17:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.733 17:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.733 { 00:21:48.733 
"cntlid": 1, 00:21:48.733 "qid": 0, 00:21:48.733 "state": "enabled", 00:21:48.733 "listen_address": { 00:21:48.733 "trtype": "TCP", 00:21:48.733 "adrfam": "IPv4", 00:21:48.733 "traddr": "10.0.0.2", 00:21:48.733 "trsvcid": "4420" 00:21:48.733 }, 00:21:48.733 "peer_address": { 00:21:48.733 "trtype": "TCP", 00:21:48.733 "adrfam": "IPv4", 00:21:48.733 "traddr": "10.0.0.1", 00:21:48.733 "trsvcid": "52234" 00:21:48.733 }, 00:21:48.733 "auth": { 00:21:48.733 "state": "completed", 00:21:48.733 "digest": "sha512", 00:21:48.733 "dhgroup": "ffdhe8192" 00:21:48.733 } 00:21:48.733 } 00:21:48.733 ]' 00:21:48.733 17:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.733 17:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.733 17:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.733 17:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:48.733 17:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.991 17:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.991 17:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.991 17:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.248 17:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmUyYzA2MTAxN2Q3NjA5YzgxZTYxMmVlMTU5MjMwOWU0ZmVhNDdjYWIxMjM0ZjQzMGYxNTdjMzRmMDA1NmZjY10Hq0g=: 00:21:50.179 17:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.179 17:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.179 17:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.179 17:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.179 17:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.179 17:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:50.179 17:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.179 17:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.179 17:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.179 17:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:50.179 17:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:50.436 17:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:50.436 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:50.436 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:50.436 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:50.436 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.436 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:50.436 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.436 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:50.436 17:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:50.693 request: 00:21:50.693 { 00:21:50.693 "name": "nvme0", 00:21:50.693 "trtype": "tcp", 00:21:50.693 "traddr": "10.0.0.2", 00:21:50.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:50.693 "adrfam": "ipv4", 00:21:50.693 "trsvcid": "4420", 00:21:50.693 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:50.693 "dhchap_key": "key3", 00:21:50.693 "method": "bdev_nvme_attach_controller", 00:21:50.693 "req_id": 1 00:21:50.693 } 00:21:50.693 Got JSON-RPC error response 00:21:50.693 response: 00:21:50.693 { 00:21:50.693 "code": -5, 00:21:50.693 "message": "Input/output error" 00:21:50.693 } 00:21:50.693 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:50.693 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:50.693 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:50.693 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:50.693 17:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:50.693 17:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:50.693 17:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:50.693 17:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:50.950 17:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:50.950 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:50.950 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:50.950 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:50.950 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.950 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:50.950 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.950 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:50.950 17:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:51.209 request: 00:21:51.209 { 00:21:51.209 "name": "nvme0", 00:21:51.209 "trtype": "tcp", 00:21:51.209 "traddr": "10.0.0.2", 00:21:51.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:51.209 "adrfam": "ipv4", 00:21:51.209 "trsvcid": "4420", 00:21:51.209 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:51.209 "dhchap_key": "key3", 00:21:51.209 "method": "bdev_nvme_attach_controller", 00:21:51.209 "req_id": 1 00:21:51.209 } 00:21:51.209 Got JSON-RPC error response 00:21:51.209 response: 00:21:51.209 { 00:21:51.209 "code": -5, 00:21:51.209 "message": "Input/output error" 00:21:51.209 } 00:21:51.209 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:51.209 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:51.209 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:51.209 17:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:51.209 17:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:51.209 17:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:51.209 17:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:51.209 17:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:51.209 17:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:51.209 17:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:51.466 17:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.466 17:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.466 17:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.466 17:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.466 17:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.466 17:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.466 17:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.466 17:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.466 17:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:51.466 17:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:51.466 17:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:51.466 17:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:51.466 17:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.466 17:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:51.466 17:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.466 17:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:51.466 17:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:51.723 request: 00:21:51.723 { 00:21:51.723 "name": "nvme0", 00:21:51.723 "trtype": "tcp", 00:21:51.723 "traddr": "10.0.0.2", 00:21:51.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:51.723 "adrfam": "ipv4", 00:21:51.723 "trsvcid": "4420", 00:21:51.723 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:51.723 "dhchap_key": "key0", 00:21:51.723 "dhchap_ctrlr_key": "key1", 00:21:51.723 "method": "bdev_nvme_attach_controller", 00:21:51.723 "req_id": 1 00:21:51.723 } 00:21:51.723 Got JSON-RPC error response 00:21:51.723 response: 00:21:51.723 { 00:21:51.723 "code": -5, 00:21:51.723 "message": "Input/output error" 00:21:51.723 } 00:21:51.723 17:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:51.723 17:57:26 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:51.723 17:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:51.723 17:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:51.723 17:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:51.723 17:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:51.981 00:21:51.981 17:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:51.981 17:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.981 17:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:52.239 17:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.239 17:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.239 17:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.497 17:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:52.497 17:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:52.497 17:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 955503 00:21:52.497 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 955503 ']' 00:21:52.497 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 955503 00:21:52.497 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:21:52.497 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:52.497 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 955503 00:21:52.497 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:52.497 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:52.497 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 955503' 00:21:52.497 killing process with pid 955503 00:21:52.497 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 955503 00:21:52.497 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 955503 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:53.062 
17:57:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:53.062 rmmod nvme_tcp 00:21:53.062 rmmod nvme_fabrics 00:21:53.062 rmmod nvme_keyring 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 977820 ']' 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 977820 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 977820 ']' 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 977820 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 977820 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 977820' 00:21:53.062 killing process with pid 977820 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 977820 00:21:53.062 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 977820 00:21:53.319 17:57:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:53.319 17:57:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:53.319 17:57:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:53.319 17:57:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:53.319 17:57:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:53.319 17:57:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.319 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.319 17:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.218 17:57:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:55.218 17:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.C2Q /tmp/spdk.key-sha256.98j /tmp/spdk.key-sha384.H1o /tmp/spdk.key-sha512.Dvs /tmp/spdk.key-sha512.3mc /tmp/spdk.key-sha384.7cS /tmp/spdk.key-sha256.gGr '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:55.218 00:21:55.218 real 3m7.038s 00:21:55.218 user 7m16.122s 00:21:55.218 sys 0m22.566s 00:21:55.477 17:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:55.477 17:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.477 ************************************ 00:21:55.478 END TEST nvmf_auth_target 
00:21:55.478 ************************************ 00:21:55.478 17:57:30 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:55.478 17:57:30 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:55.478 17:57:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:21:55.478 17:57:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:55.478 17:57:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:55.478 ************************************ 00:21:55.478 START TEST nvmf_bdevio_no_huge 00:21:55.478 ************************************ 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:55.478 * Looking for test storage... 00:21:55.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.478 
17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:55.478 17:57:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:57.378 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:57.379 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:57.379 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.379 17:57:32 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:57.379 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:57.379 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.379 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:57.636 
17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:57.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:21:57.636 00:21:57.636 --- 10.0.0.2 ping statistics --- 00:21:57.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.636 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:57.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:21:57.636 00:21:57.636 --- 10.0.0.1 ping statistics --- 00:21:57.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.636 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=980474 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 980474 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 980474 ']' 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 
-- # local max_retries=100 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:57.636 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:57.636 [2024-07-20 17:57:32.367863] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:21:57.636 [2024-07-20 17:57:32.367955] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:57.894 [2024-07-20 17:57:32.440823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:57.894 [2024-07-20 17:57:32.532305] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.894 [2024-07-20 17:57:32.532369] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.894 [2024-07-20 17:57:32.532397] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.894 [2024-07-20 17:57:32.532411] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.894 [2024-07-20 17:57:32.532423] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:57.894 [2024-07-20 17:57:32.532519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:57.894 [2024-07-20 17:57:32.532574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:57.894 [2024-07-20 17:57:32.532632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:57.894 [2024-07-20 17:57:32.532635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:57.894 [2024-07-20 17:57:32.660095] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.894 17:57:32 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:57.894 Malloc0 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.894 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:58.152 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.152 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.152 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.152 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:58.152 [2024-07-20 17:57:32.698073] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.152 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.152 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:58.152 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:58.152 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:58.152 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:58.152 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:58.152 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:58.152 { 00:21:58.152 "params": { 00:21:58.152 "name": "Nvme$subsystem", 00:21:58.152 "trtype": "$TEST_TRANSPORT", 00:21:58.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.152 "adrfam": "ipv4", 00:21:58.152 "trsvcid": "$NVMF_PORT", 00:21:58.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.152 "hdgst": ${hdgst:-false}, 00:21:58.152 "ddgst": ${ddgst:-false} 00:21:58.152 }, 00:21:58.152 "method": "bdev_nvme_attach_controller" 00:21:58.152 } 00:21:58.152 EOF 00:21:58.152 )") 00:21:58.152 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:58.152 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:21:58.152 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:58.152 17:57:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:58.152 "params": { 00:21:58.152 "name": "Nvme1", 00:21:58.152 "trtype": "tcp", 00:21:58.152 "traddr": "10.0.0.2", 00:21:58.152 "adrfam": "ipv4", 00:21:58.152 "trsvcid": "4420", 00:21:58.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:58.152 "hdgst": false, 00:21:58.152 "ddgst": false 00:21:58.152 }, 00:21:58.152 "method": "bdev_nvme_attach_controller" 00:21:58.152 }' 00:21:58.152 [2024-07-20 17:57:32.746141] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:21:58.152 [2024-07-20 17:57:32.746230] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid980498 ] 00:21:58.152 [2024-07-20 17:57:32.810911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:58.152 [2024-07-20 17:57:32.898288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.152 [2024-07-20 17:57:32.898338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.152 [2024-07-20 17:57:32.898341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.717 I/O targets: 00:21:58.717 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:58.717 00:21:58.717 00:21:58.717 CUnit - A unit testing framework for C - Version 2.1-3 00:21:58.717 http://cunit.sourceforge.net/ 00:21:58.717 00:21:58.717 00:21:58.717 Suite: bdevio tests on: Nvme1n1 00:21:58.717 Test: blockdev write read block ...passed 00:21:58.717 Test: blockdev write zeroes read block ...passed 00:21:58.717 Test: blockdev write zeroes read no split ...passed 00:21:58.717 Test: blockdev write zeroes read split ...passed 00:21:58.717 Test: blockdev write zeroes read split partial ...passed 00:21:58.717 Test: blockdev reset ...[2024-07-20 17:57:33.442292] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:58.717 [2024-07-20 17:57:33.442415] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be92a0 (9): Bad file descriptor 00:21:58.717 [2024-07-20 17:57:33.471518] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:58.717 passed 00:21:58.977 Test: blockdev write read 8 blocks ...passed 00:21:58.977 Test: blockdev write read size > 128k ...passed 00:21:58.977 Test: blockdev write read invalid size ...passed 00:21:58.977 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:58.977 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:58.977 Test: blockdev write read max offset ...passed 00:21:58.977 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:58.977 Test: blockdev writev readv 8 blocks ...passed 00:21:58.977 Test: blockdev writev readv 30 x 1block ...passed 00:21:58.977 Test: blockdev writev readv block ...passed 00:21:58.977 Test: blockdev writev readv size > 128k ...passed 00:21:58.977 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:58.977 Test: blockdev comparev and writev ...[2024-07-20 17:57:33.738810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:58.977 [2024-07-20 17:57:33.738844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:58.977 [2024-07-20 17:57:33.738875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:58.977 [2024-07-20 17:57:33.738891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.977 [2024-07-20 17:57:33.739369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:58.977 [2024-07-20 17:57:33.739400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:58.977 [2024-07-20 17:57:33.739425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:58.977 [2024-07-20 17:57:33.739442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:58.977 [2024-07-20 17:57:33.739911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:58.977 [2024-07-20 17:57:33.739936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:58.977 [2024-07-20 17:57:33.739964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:58.977 [2024-07-20 17:57:33.739981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:58.977 [2024-07-20 17:57:33.740440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:58.977 [2024-07-20 17:57:33.740465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:58.977 [2024-07-20 17:57:33.740492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:58.977 [2024-07-20 17:57:33.740508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:59.235 passed 00:21:59.235 Test: blockdev nvme passthru rw ...passed 00:21:59.235 Test: blockdev nvme passthru vendor specific ...[2024-07-20 17:57:33.824353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.235 [2024-07-20 17:57:33.824384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:59.235 [2024-07-20 17:57:33.824701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.235 [2024-07-20 17:57:33.824725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:59.235 [2024-07-20 17:57:33.825042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.235 [2024-07-20 17:57:33.825065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:59.235 [2024-07-20 17:57:33.825378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.235 [2024-07-20 17:57:33.825402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:59.235 passed 00:21:59.235 Test: blockdev nvme admin passthru ...passed 00:21:59.235 Test: blockdev copy ...passed 00:21:59.235 00:21:59.235 Run Summary: Type Total Ran Passed Failed Inactive 00:21:59.235 suites 1 1 n/a 0 0 00:21:59.235 tests 23 23 23 0 0 00:21:59.235 asserts 152 152 152 0 n/a 00:21:59.235 00:21:59.235 Elapsed time = 1.340 seconds 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:59.492 rmmod nvme_tcp 00:21:59.492 rmmod nvme_fabrics 00:21:59.492 rmmod nvme_keyring 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 980474 ']' 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 980474 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 980474 ']' 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 980474 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:59.492 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 980474 00:21:59.749 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:21:59.749 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:21:59.749 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 980474' 00:21:59.749 killing process with pid 980474 00:21:59.749 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 980474 00:21:59.749 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 980474 00:22:00.045 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:00.045 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:00.045 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:00.045 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:00.045 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:00.045 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.045 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.045 17:57:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.944 17:57:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:01.944 00:22:01.944 real 0m6.670s 00:22:01.944 user 0m11.526s 00:22:01.944 sys 0m2.588s 00:22:01.944 17:57:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:01.944 17:57:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:01.944 ************************************ 00:22:01.944 END TEST nvmf_bdevio_no_huge 00:22:01.944 ************************************ 00:22:02.202 17:57:36 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:02.202 17:57:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:02.202 17:57:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:02.202 17:57:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:02.202 ************************************ 00:22:02.202 START TEST nvmf_tls 00:22:02.202 ************************************ 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:02.202 * Looking for test storage... 
00:22:02.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:02.202 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:02.203 17:57:36 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:02.203 17:57:36 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:02.203 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:02.203 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.203 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:02.203 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:02.203 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:02.203 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.203 17:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.203 17:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.203 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:02.203 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:02.203 17:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:02.203 17:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:04.107 
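The device-discovery pass that follows maps supported PCI NICs to their kernel net interfaces by walking sysfs. Condensed, the logic of gather_supported_nvmf_pci_devs amounts to the sketch below; the 0x159b device ID (Intel E810, as detected in this run) is an assumption for illustration, not the full ID table the script actually checks:

# Rough sketch of the PCI -> net-device mapping traced below (assumes E810, device ID 0x159b).
pci_devs=($(grep -lx 0x159b /sys/bus/pci/devices/*/device 2>/dev/null | xargs -r -n1 dirname))
net_devs=()
for pci in "${pci_devs[@]}"; do
    # Each PCI device directory exposes its bound net interfaces under net/.
    for net in "$pci"/net/*; do
        [ -e "$net" ] || continue
        net_devs+=("${net##*/}")
    done
done
echo "Found net devices: ${net_devs[*]}"

With two ports found (cvl_0_0 and cvl_0_1 in the trace), the first becomes the target interface and the second the initiator interface before nvmf_tcp_init repeats the namespace wiring shown earlier.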
17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:04.107 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:04.107 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:04.107 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:04.107 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:04.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:04.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:22:04.107 00:22:04.107 --- 10.0.0.2 ping statistics --- 00:22:04.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.107 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:04.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:04.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:22:04.107 00:22:04.107 --- 10.0.0.1 ping statistics --- 00:22:04.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.107 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=982684 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 982684 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 982684 ']' 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:04.107 17:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.108 17:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:04.108 17:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.365 [2024-07-20 17:57:38.905057] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:22:04.365 [2024-07-20 17:57:38.905143] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.365 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.365 [2024-07-20 17:57:38.969252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.365 [2024-07-20 17:57:39.053512] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.365 [2024-07-20 17:57:39.053564] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:04.365 [2024-07-20 17:57:39.053608] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.365 [2024-07-20 17:57:39.053621] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.365 [2024-07-20 17:57:39.053631] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:04.365 [2024-07-20 17:57:39.053657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.365 17:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:04.365 17:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:04.365 17:57:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:04.365 17:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.365 17:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.365 17:57:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.365 17:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:04.365 17:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:04.623 true 00:22:04.623 17:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:04.623 17:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:04.880 17:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:04.881 17:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:04.881 17:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:05.139 17:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:05.139 17:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:05.398 17:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:05.398 17:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:05.398 17:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:05.655 17:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:05.655 17:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:05.913 17:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:05.913 17:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:05.913 17:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:05.913 17:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:06.172 17:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:06.172 17:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:06.172 17:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:06.429 17:57:41 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:06.429 17:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:06.688 17:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:06.688 17:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:06.688 17:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:06.945 17:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:06.945 17:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:07.204 17:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:07.204 17:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:07.204 17:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:07.204 17:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:07.204 17:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:07.204 17:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:07.204 17:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:07.204 17:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:07.204 17:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:07.204 17:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:07.204 17:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:07.204 17:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:07.204 17:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:07.204 17:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:07.204 17:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:07.204 17:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:07.204 17:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:07.461 17:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:07.461 17:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:07.461 17:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.xYJrmUvYCz 00:22:07.461 17:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:07.461 17:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.aREz7rOQ4a 00:22:07.461 17:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:07.461 17:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:07.461 17:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.xYJrmUvYCz 00:22:07.461 17:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.aREz7rOQ4a 00:22:07.461 17:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:07.719 17:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:07.977 17:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.xYJrmUvYCz 00:22:07.977 17:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xYJrmUvYCz 00:22:07.977 17:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:08.235 [2024-07-20 17:57:42.923778] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.235 17:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:08.492 17:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:08.749 [2024-07-20 17:57:43.465255] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:08.749 [2024-07-20 17:57:43.465523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:08.749 17:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:09.007 malloc0 00:22:09.007 17:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:09.264 17:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xYJrmUvYCz 00:22:09.522 [2024-07-20 17:57:44.254701] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:09.522 17:57:44 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.xYJrmUvYCz 00:22:09.522 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.715 Initializing NVMe Controllers 00:22:21.715 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:21.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:21.715 Initialization complete. Launching workers. 
00:22:21.715 ======================================================== 00:22:21.715 Latency(us) 00:22:21.715 Device Information : IOPS MiB/s Average min max 00:22:21.715 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7706.25 30.10 8307.34 1192.61 9657.88 00:22:21.715 ======================================================== 00:22:21.715 Total : 7706.25 30.10 8307.34 1192.61 9657.88 00:22:21.715 00:22:21.715 17:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xYJrmUvYCz 00:22:21.715 17:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:21.715 17:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:21.715 17:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:21.715 17:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xYJrmUvYCz' 00:22:21.715 17:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:21.715 17:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=984458 00:22:21.715 17:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:21.715 17:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 984458 /var/tmp/bdevperf.sock 00:22:21.715 17:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:21.716 17:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 984458 ']' 00:22:21.716 17:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:21.716 17:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:21.716 17:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:21.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:21.716 17:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:21.716 17:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.716 [2024-07-20 17:57:54.426213] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:21.716 [2024-07-20 17:57:54.426303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid984458 ] 00:22:21.716 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.716 [2024-07-20 17:57:54.488931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.716 [2024-07-20 17:57:54.576369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.716 17:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:21.716 17:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:21.716 17:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xYJrmUvYCz 00:22:21.716 [2024-07-20 17:57:54.955182] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:21.716 [2024-07-20 17:57:54.955299] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:21.716 TLSTESTn1 00:22:21.716 17:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:21.716 Running I/O for 10 seconds... 00:22:31.708 00:22:31.708 Latency(us) 00:22:31.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.708 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:31.708 Verification LBA range: start 0x0 length 0x2000 00:22:31.708 TLSTESTn1 : 10.15 608.93 2.38 0.00 0.00 209028.97 7039.05 288940.94 00:22:31.708 =================================================================================================================== 00:22:31.708 Total : 608.93 2.38 0.00 0.00 209028.97 7039.05 288940.94 00:22:31.708 0 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 984458 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 984458 ']' 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 984458 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 984458 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 984458' 00:22:31.708 killing process with pid 984458 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 984458 00:22:31.708 Received shutdown signal, test time was about 10.000000 seconds 00:22:31.708 00:22:31.708 Latency(us) 00:22:31.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.708 
=================================================================================================================== 00:22:31.708 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:31.708 [2024-07-20 17:58:05.371458] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 984458 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aREz7rOQ4a 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aREz7rOQ4a 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aREz7rOQ4a 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aREz7rOQ4a' 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=985769 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:31.708 17:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:31.709 17:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 985769 /var/tmp/bdevperf.sock 00:22:31.709 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 985769 ']' 00:22:31.709 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.709 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:31.709 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.709 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:31.709 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.709 [2024-07-20 17:58:05.609597] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:31.709 [2024-07-20 17:58:05.609690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid985769 ] 00:22:31.709 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.709 [2024-07-20 17:58:05.668744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.709 [2024-07-20 17:58:05.760781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.709 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:31.709 17:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:31.709 17:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aREz7rOQ4a 00:22:31.709 [2024-07-20 17:58:06.091917] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:31.709 [2024-07-20 17:58:06.092045] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:31.709 [2024-07-20 17:58:06.099502] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:31.709 [2024-07-20 17:58:06.099902] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b4840 (107): Transport endpoint is not connected 00:22:31.709 [2024-07-20 17:58:06.100892] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b4840 (9): Bad file descriptor 00:22:31.709 [2024-07-20 17:58:06.101890] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:31.709 [2024-07-20 17:58:06.101912] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:31.709 [2024-07-20 17:58:06.101929] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:31.709 request: 00:22:31.709 { 00:22:31.709 "name": "TLSTEST", 00:22:31.709 "trtype": "tcp", 00:22:31.709 "traddr": "10.0.0.2", 00:22:31.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:31.709 "adrfam": "ipv4", 00:22:31.709 "trsvcid": "4420", 00:22:31.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.709 "psk": "/tmp/tmp.aREz7rOQ4a", 00:22:31.709 "method": "bdev_nvme_attach_controller", 00:22:31.709 "req_id": 1 00:22:31.709 } 00:22:31.709 Got JSON-RPC error response 00:22:31.709 response: 00:22:31.709 { 00:22:31.709 "code": -5, 00:22:31.709 "message": "Input/output error" 00:22:31.709 } 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 985769 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 985769 ']' 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 985769 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 985769 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 985769' 00:22:31.709 killing process with pid 985769 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 985769 00:22:31.709 Received shutdown signal, test time was about 10.000000 seconds 00:22:31.709 00:22:31.709 Latency(us) 00:22:31.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.709 =================================================================================================================== 00:22:31.709 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:31.709 [2024-07-20 17:58:06.152105] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 985769 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xYJrmUvYCz 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xYJrmUvYCz 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xYJrmUvYCz 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xYJrmUvYCz' 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=985904 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 985904 /var/tmp/bdevperf.sock 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 985904 ']' 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:31.709 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.709 [2024-07-20 17:58:06.409745] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:31.709 [2024-07-20 17:58:06.409854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid985904 ] 00:22:31.709 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.709 [2024-07-20 17:58:06.469495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.967 [2024-07-20 17:58:06.563457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.967 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:31.967 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:31.967 17:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.xYJrmUvYCz 00:22:32.225 [2024-07-20 17:58:06.946289] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.225 [2024-07-20 17:58:06.946404] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:32.225 [2024-07-20 17:58:06.951850] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:32.225 [2024-07-20 17:58:06.951891] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:32.225 [2024-07-20 17:58:06.952072] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:32.225 [2024-07-20 17:58:06.952375] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1033840 (107): Transport endpoint is not connected 00:22:32.225 [2024-07-20 17:58:06.953362] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1033840 (9): Bad file descriptor 00:22:32.225 [2024-07-20 17:58:06.954359] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:32.225 [2024-07-20 17:58:06.954382] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:32.225 [2024-07-20 17:58:06.954399] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:32.225 request: 00:22:32.225 { 00:22:32.225 "name": "TLSTEST", 00:22:32.225 "trtype": "tcp", 00:22:32.225 "traddr": "10.0.0.2", 00:22:32.225 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:32.225 "adrfam": "ipv4", 00:22:32.225 "trsvcid": "4420", 00:22:32.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.225 "psk": "/tmp/tmp.xYJrmUvYCz", 00:22:32.225 "method": "bdev_nvme_attach_controller", 00:22:32.225 "req_id": 1 00:22:32.225 } 00:22:32.225 Got JSON-RPC error response 00:22:32.225 response: 00:22:32.225 { 00:22:32.225 "code": -5, 00:22:32.225 "message": "Input/output error" 00:22:32.225 } 00:22:32.225 17:58:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 985904 00:22:32.225 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 985904 ']' 00:22:32.225 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 985904 00:22:32.225 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:32.225 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:32.225 17:58:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 985904 00:22:32.225 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:32.225 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:32.225 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 985904' 00:22:32.225 killing process with pid 985904 00:22:32.225 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 985904 00:22:32.225 Received shutdown signal, test time was about 10.000000 seconds 00:22:32.225 00:22:32.225 Latency(us) 00:22:32.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.225 =================================================================================================================== 00:22:32.225 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:32.225 [2024-07-20 17:58:07.004027] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:32.225 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 985904 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xYJrmUvYCz 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xYJrmUvYCz 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xYJrmUvYCz 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xYJrmUvYCz' 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=986043 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 986043 /var/tmp/bdevperf.sock 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 986043 ']' 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:32.483 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.483 [2024-07-20 17:58:07.266062] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:32.483 [2024-07-20 17:58:07.266177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid986043 ] 00:22:32.742 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.742 [2024-07-20 17:58:07.324935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.742 [2024-07-20 17:58:07.410914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.742 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:32.742 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:32.742 17:58:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xYJrmUvYCz 00:22:33.000 [2024-07-20 17:58:07.731210] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:33.000 [2024-07-20 17:58:07.731320] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:33.000 [2024-07-20 17:58:07.736473] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:33.000 [2024-07-20 17:58:07.736505] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:33.000 [2024-07-20 17:58:07.736570] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:33.000 [2024-07-20 17:58:07.736738] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:33.000 [2024-07-20 17:58:07.737134] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x608840 (9): Bad file descriptor 00:22:33.000 [2024-07-20 17:58:07.738130] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:33.000 [2024-07-20 17:58:07.738172] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:33.000 [2024-07-20 17:58:07.738188] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:33.000 request: 00:22:33.000 { 00:22:33.000 "name": "TLSTEST", 00:22:33.000 "trtype": "tcp", 00:22:33.000 "traddr": "10.0.0.2", 00:22:33.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.000 "adrfam": "ipv4", 00:22:33.000 "trsvcid": "4420", 00:22:33.000 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:33.000 "psk": "/tmp/tmp.xYJrmUvYCz", 00:22:33.000 "method": "bdev_nvme_attach_controller", 00:22:33.000 "req_id": 1 00:22:33.000 } 00:22:33.000 Got JSON-RPC error response 00:22:33.000 response: 00:22:33.000 { 00:22:33.000 "code": -5, 00:22:33.000 "message": "Input/output error" 00:22:33.000 } 00:22:33.000 17:58:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 986043 00:22:33.000 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 986043 ']' 00:22:33.000 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 986043 00:22:33.000 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:33.000 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:33.000 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 986043 00:22:33.000 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:33.000 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:33.000 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 986043' 00:22:33.000 killing process with pid 986043 00:22:33.000 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 986043 00:22:33.000 Received shutdown signal, test time was about 10.000000 seconds 00:22:33.000 00:22:33.000 Latency(us) 00:22:33.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.000 =================================================================================================================== 00:22:33.000 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:33.000 [2024-07-20 17:58:07.785912] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:33.000 17:58:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 986043 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.258 
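Each of these negative cases drives the same bdev_nvme_attach_controller call through scripts/rpc.py against the bdevperf socket, and the JSON request the target rejects is dumped verbatim in the log above. A minimal sketch of issuing that call directly is shown below; it assumes the plain JSON-RPC 2.0 framing over a Unix socket that rpc.py uses, and the socket path, NQNs and PSK path are simply copied from the log for illustration rather than being canonical values.

```python
import json
import socket

def rpc_call(sock_path: str, method: str, params: dict) -> dict:
    """Send one JSON-RPC 2.0 request to an SPDK-style Unix socket and
    read back a single response (a simplified sketch of what
    scripts/rpc.py does for the tests above)."""
    req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full response arrived")
            buf += chunk
            try:
                return json.loads(buf)  # complete JSON object received
            except json.JSONDecodeError:
                continue  # keep reading until the response is complete

# Roughly the request body seen in the failing test (values taken from the log):
resp = rpc_call("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
    "name": "TLSTEST",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode2",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "psk": "/tmp/tmp.xYJrmUvYCz",
})
print(resp)
```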
17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=986064 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 986064 /var/tmp/bdevperf.sock 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 986064 ']' 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:33.258 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.258 [2024-07-20 17:58:08.051016] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:33.258 [2024-07-20 17:58:08.051105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid986064 ] 00:22:33.516 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.516 [2024-07-20 17:58:08.111498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.516 [2024-07-20 17:58:08.195168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.516 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:33.516 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:33.516 17:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:33.774 [2024-07-20 17:58:08.556492] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:33.774 [2024-07-20 17:58:08.558396] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1419f10 (9): Bad file descriptor 00:22:33.774 [2024-07-20 17:58:08.559404] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:33.774 [2024-07-20 17:58:08.559425] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:33.774 [2024-07-20 17:58:08.559440] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:33.774 request: 00:22:33.774 { 00:22:33.774 "name": "TLSTEST", 00:22:33.774 "trtype": "tcp", 00:22:33.774 "traddr": "10.0.0.2", 00:22:33.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.774 "adrfam": "ipv4", 00:22:33.774 "trsvcid": "4420", 00:22:33.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.774 "method": "bdev_nvme_attach_controller", 00:22:33.774 "req_id": 1 00:22:33.774 } 00:22:33.774 Got JSON-RPC error response 00:22:33.774 response: 00:22:33.774 { 00:22:33.774 "code": -5, 00:22:33.774 "message": "Input/output error" 00:22:33.774 } 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 986064 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 986064 ']' 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 986064 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 986064 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 986064' 00:22:34.032 killing process with pid 986064 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 986064 00:22:34.032 Received shutdown signal, test time was about 10.000000 seconds 00:22:34.032 00:22:34.032 Latency(us) 00:22:34.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.032 =================================================================================================================== 00:22:34.032 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 986064 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 982684 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 982684 ']' 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 982684 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 982684 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 982684' 00:22:34.032 killing process with pid 982684 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 982684 00:22:34.032 
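The step that follows builds the long-form TLS key that gets written to /tmp/tmp.cnXFtFLDbA and passed via --psk. A minimal sketch of that interchange encoding is below; it assumes the payload is base64(key bytes followed by a little-endian CRC-32 of those bytes) behind the NVMeTLSkey-1 prefix and a digest id (02 here), which is an inference from the resulting key string in the log rather than something the log states outright.

```python
import base64
import zlib

def format_interchange_psk(key: bytes, digest: int = 2) -> str:
    """Wrap raw PSK bytes in an NVMe TLS interchange-style key string.

    Assumption: payload = base64(key || CRC-32(key), little-endian),
    prefixed with 'NVMeTLSkey-1' and a two-digit hash id (02 in the log).
    """
    crc = zlib.crc32(key).to_bytes(4, "little")
    payload = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{digest:02d}:{payload}:"

if __name__ == "__main__":
    # The test feeds the hex string itself as key material (ASCII bytes).
    key = b"00112233445566778899aabbccddeeff0011223344556677"
    print(format_interchange_psk(key, 2))
```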
[2024-07-20 17:58:08.822568] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:34.032 17:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 982684 00:22:34.290 17:58:09 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:34.290 17:58:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:34.290 17:58:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:34.290 17:58:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:34.290 17:58:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:34.290 17:58:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:34.290 17:58:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.cnXFtFLDbA 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.cnXFtFLDbA 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=986213 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 986213 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 986213 ']' 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:34.548 17:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.548 [2024-07-20 17:58:09.146828] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:34.548 [2024-07-20 17:58:09.146926] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.548 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.548 [2024-07-20 17:58:09.211974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.548 [2024-07-20 17:58:09.297007] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.548 [2024-07-20 17:58:09.297064] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.548 [2024-07-20 17:58:09.297092] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.548 [2024-07-20 17:58:09.297102] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.548 [2024-07-20 17:58:09.297112] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.548 [2024-07-20 17:58:09.297146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.806 17:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:34.806 17:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:34.806 17:58:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:34.806 17:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.806 17:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.806 17:58:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.806 17:58:09 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.cnXFtFLDbA 00:22:34.806 17:58:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.cnXFtFLDbA 00:22:34.807 17:58:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:35.064 [2024-07-20 17:58:09.704836] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.064 17:58:09 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:35.321 17:58:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:35.579 [2024-07-20 17:58:10.210267] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:35.579 [2024-07-20 17:58:10.210538] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.579 17:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:35.837 malloc0 00:22:35.837 17:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:36.095 17:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cnXFtFLDbA 
00:22:36.353 [2024-07-20 17:58:10.967870] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:36.353 17:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cnXFtFLDbA 00:22:36.353 17:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:36.353 17:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:36.353 17:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:36.353 17:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cnXFtFLDbA' 00:22:36.353 17:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:36.353 17:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=986497 00:22:36.353 17:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:36.353 17:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:36.353 17:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 986497 /var/tmp/bdevperf.sock 00:22:36.353 17:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 986497 ']' 00:22:36.353 17:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.353 17:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:36.353 17:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.353 17:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:36.353 17:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.353 [2024-07-20 17:58:11.028273] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:36.353 [2024-07-20 17:58:11.028365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid986497 ] 00:22:36.353 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.354 [2024-07-20 17:58:11.087649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.612 [2024-07-20 17:58:11.172807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.612 17:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:36.612 17:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:36.612 17:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cnXFtFLDbA 00:22:36.869 [2024-07-20 17:58:11.509421] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:36.869 [2024-07-20 17:58:11.509525] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:36.869 TLSTESTn1 00:22:36.869 17:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:37.127 Running I/O for 10 seconds... 00:22:47.086 00:22:47.086 Latency(us) 00:22:47.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.086 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:47.086 Verification LBA range: start 0x0 length 0x2000 00:22:47.086 TLSTESTn1 : 10.13 739.09 2.89 0.00 0.00 172287.04 6359.42 284280.60 00:22:47.086 =================================================================================================================== 00:22:47.086 Total : 739.09 2.89 0.00 0.00 172287.04 6359.42 284280.60 00:22:47.086 0 00:22:47.086 17:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:47.086 17:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 986497 00:22:47.086 17:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 986497 ']' 00:22:47.086 17:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 986497 00:22:47.086 17:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:47.086 17:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:47.086 17:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 986497 00:22:47.345 17:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:47.345 17:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:47.345 17:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 986497' 00:22:47.345 killing process with pid 986497 00:22:47.345 17:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 986497 00:22:47.345 Received shutdown signal, test time was about 10.000000 seconds 00:22:47.345 00:22:47.345 Latency(us) 00:22:47.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.345 
=================================================================================================================== 00:22:47.345 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.345 [2024-07-20 17:58:21.897388] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:47.345 17:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 986497 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.cnXFtFLDbA 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cnXFtFLDbA 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cnXFtFLDbA 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cnXFtFLDbA 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cnXFtFLDbA' 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=987809 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 987809 /var/tmp/bdevperf.sock 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 987809 ']' 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:47.345 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.604 [2024-07-20 17:58:22.172935] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:47.604 [2024-07-20 17:58:22.173026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid987809 ] 00:22:47.604 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.604 [2024-07-20 17:58:22.230198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.604 [2024-07-20 17:58:22.310146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.862 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:47.862 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:47.862 17:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cnXFtFLDbA 00:22:47.862 [2024-07-20 17:58:22.649433] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:47.862 [2024-07-20 17:58:22.649511] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:47.862 [2024-07-20 17:58:22.649525] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.cnXFtFLDbA 00:22:47.862 request: 00:22:47.862 { 00:22:47.862 "name": "TLSTEST", 00:22:47.862 "trtype": "tcp", 00:22:47.862 "traddr": "10.0.0.2", 00:22:47.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:47.862 "adrfam": "ipv4", 00:22:47.862 "trsvcid": "4420", 00:22:47.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.862 "psk": "/tmp/tmp.cnXFtFLDbA", 00:22:47.862 "method": "bdev_nvme_attach_controller", 00:22:47.862 "req_id": 1 00:22:47.862 } 00:22:47.862 Got JSON-RPC error response 00:22:47.862 response: 00:22:47.862 { 00:22:47.862 "code": -1, 00:22:47.862 "message": "Operation not permitted" 00:22:47.862 } 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 987809 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 987809 ']' 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 987809 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 987809 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 987809' 00:22:48.120 killing process with pid 987809 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 987809 00:22:48.120 Received shutdown signal, test time was about 10.000000 seconds 00:22:48.120 00:22:48.120 Latency(us) 00:22:48.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.120 =================================================================================================================== 00:22:48.120 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # 
wait 987809 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 986213 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 986213 ']' 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 986213 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:48.120 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:48.378 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 986213 00:22:48.378 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:48.378 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:48.378 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 986213' 00:22:48.378 killing process with pid 986213 00:22:48.378 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 986213 00:22:48.378 [2024-07-20 17:58:22.937460] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:48.378 17:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 986213 00:22:48.378 17:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:48.378 17:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:48.378 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:48.378 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.378 17:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=987953 00:22:48.378 17:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:48.378 17:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 987953 00:22:48.378 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 987953 ']' 00:22:48.378 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.378 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:48.378 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.378 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:48.378 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.636 [2024-07-20 17:58:23.198613] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:48.636 [2024-07-20 17:58:23.198692] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.636 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.636 [2024-07-20 17:58:23.261899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.636 [2024-07-20 17:58:23.345318] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.636 [2024-07-20 17:58:23.345388] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.636 [2024-07-20 17:58:23.345401] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.636 [2024-07-20 17:58:23.345412] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.636 [2024-07-20 17:58:23.345421] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.636 [2024-07-20 17:58:23.345448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.894 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:48.894 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:48.894 17:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:48.894 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:48.894 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.894 17:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.894 17:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.cnXFtFLDbA 00:22:48.894 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:48.894 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.cnXFtFLDbA 00:22:48.894 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:22:48.894 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:48.894 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:22:48.894 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:48.894 17:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.cnXFtFLDbA 00:22:48.894 17:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.cnXFtFLDbA 00:22:48.894 17:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:49.152 [2024-07-20 17:58:23.696737] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.152 17:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:49.411 17:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:49.411 [2024-07-20 17:58:24.182051] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:22:49.411 [2024-07-20 17:58:24.182338] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.411 17:58:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:49.668 malloc0 00:22:49.668 17:58:24 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:49.925 17:58:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cnXFtFLDbA 00:22:50.182 [2024-07-20 17:58:24.947475] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:50.182 [2024-07-20 17:58:24.947517] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:50.182 [2024-07-20 17:58:24.947554] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:50.182 request: 00:22:50.182 { 00:22:50.182 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.182 "host": "nqn.2016-06.io.spdk:host1", 00:22:50.182 "psk": "/tmp/tmp.cnXFtFLDbA", 00:22:50.182 "method": "nvmf_subsystem_add_host", 00:22:50.182 "req_id": 1 00:22:50.182 } 00:22:50.182 Got JSON-RPC error response 00:22:50.182 response: 00:22:50.182 { 00:22:50.182 "code": -32603, 00:22:50.182 "message": "Internal error" 00:22:50.182 } 00:22:50.182 17:58:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:50.182 17:58:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:50.182 17:58:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:50.182 17:58:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:50.182 17:58:24 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 987953 00:22:50.182 17:58:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 987953 ']' 00:22:50.182 17:58:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 987953 00:22:50.182 17:58:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:50.182 17:58:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:50.182 17:58:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 987953 00:22:50.440 17:58:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:50.440 17:58:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:50.440 17:58:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 987953' 00:22:50.440 killing process with pid 987953 00:22:50.440 17:58:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 987953 00:22:50.440 17:58:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 987953 00:22:50.440 17:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.cnXFtFLDbA 00:22:50.440 17:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:50.440 17:58:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:50.440 17:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:50.440 17:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.440 17:58:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # 
nvmfpid=988215 00:22:50.440 17:58:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:50.440 17:58:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 988215 00:22:50.440 17:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 988215 ']' 00:22:50.440 17:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.440 17:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:50.440 17:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.440 17:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:50.440 17:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.699 [2024-07-20 17:58:25.259502] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:22:50.699 [2024-07-20 17:58:25.259577] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.699 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.699 [2024-07-20 17:58:25.326407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.699 [2024-07-20 17:58:25.417038] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.699 [2024-07-20 17:58:25.417102] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.699 [2024-07-20 17:58:25.417128] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.699 [2024-07-20 17:58:25.417152] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.699 [2024-07-20 17:58:25.417164] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
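Lines @177 through @184 above are the negative case: nvmf_subsystem_add_host is pointed at /tmp/tmp.cnXFtFLDbA while the key file still has loose permissions, the target answers with "Incorrect permissions for PSK file" and JSON-RPC error -32603, the previous target (pid 987953) is killed, the key is chmod'ed to 0600, and a fresh target is started for the positive case. The retry boils down to the following (the key path is the temp file from this particular run; any owner-only PSK file behaves the same way):

  chmod 0600 /tmp/tmp.cnXFtFLDbA
      # the target refuses to load a PSK file that other users can read
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.cnXFtFLDbA
      # accepted once permissions are tightened; note the target still logs the
      # PSK-path form as a deprecated feature scheduled for removal in v24.09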
00:22:50.699 [2024-07-20 17:58:25.417196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.957 17:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:50.957 17:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:50.957 17:58:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:50.957 17:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:50.957 17:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.957 17:58:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.957 17:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.cnXFtFLDbA 00:22:50.957 17:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.cnXFtFLDbA 00:22:50.957 17:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:51.243 [2024-07-20 17:58:25.836513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.243 17:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:51.500 17:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:51.758 [2024-07-20 17:58:26.378000] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:51.758 [2024-07-20 17:58:26.378268] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.758 17:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:52.016 malloc0 00:22:52.016 17:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:52.274 17:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cnXFtFLDbA 00:22:52.560 [2024-07-20 17:58:27.182430] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:52.560 17:58:27 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=988416 00:22:52.560 17:58:27 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:52.560 17:58:27 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:52.560 17:58:27 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 988416 /var/tmp/bdevperf.sock 00:22:52.560 17:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 988416 ']' 00:22:52.560 17:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.560 17:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:52.560 17:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.560 17:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:52.560 17:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.560 [2024-07-20 17:58:27.242707] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:22:52.560 [2024-07-20 17:58:27.242803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid988416 ] 00:22:52.560 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.560 [2024-07-20 17:58:27.300116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.818 [2024-07-20 17:58:27.385429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.818 17:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:52.818 17:58:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:52.818 17:58:27 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cnXFtFLDbA 00:22:53.076 [2024-07-20 17:58:27.709624] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.076 [2024-07-20 17:58:27.709742] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:53.076 TLSTESTn1 00:22:53.076 17:58:27 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:53.641 17:58:28 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:53.641 "subsystems": [ 00:22:53.641 { 00:22:53.641 "subsystem": "keyring", 00:22:53.641 "config": [] 00:22:53.641 }, 00:22:53.641 { 00:22:53.641 "subsystem": "iobuf", 00:22:53.641 "config": [ 00:22:53.641 { 00:22:53.641 "method": "iobuf_set_options", 00:22:53.641 "params": { 00:22:53.641 "small_pool_count": 8192, 00:22:53.641 "large_pool_count": 1024, 00:22:53.641 "small_bufsize": 8192, 00:22:53.641 "large_bufsize": 135168 00:22:53.641 } 00:22:53.641 } 00:22:53.641 ] 00:22:53.641 }, 00:22:53.641 { 00:22:53.641 "subsystem": "sock", 00:22:53.641 "config": [ 00:22:53.641 { 00:22:53.641 "method": "sock_set_default_impl", 00:22:53.641 "params": { 00:22:53.641 "impl_name": "posix" 00:22:53.641 } 00:22:53.641 }, 00:22:53.641 { 00:22:53.641 "method": "sock_impl_set_options", 00:22:53.641 "params": { 00:22:53.641 "impl_name": "ssl", 00:22:53.641 "recv_buf_size": 4096, 00:22:53.641 "send_buf_size": 4096, 00:22:53.641 "enable_recv_pipe": true, 00:22:53.641 "enable_quickack": false, 00:22:53.641 "enable_placement_id": 0, 00:22:53.641 "enable_zerocopy_send_server": true, 00:22:53.641 "enable_zerocopy_send_client": false, 00:22:53.641 "zerocopy_threshold": 0, 00:22:53.641 "tls_version": 0, 00:22:53.641 "enable_ktls": false 00:22:53.641 } 00:22:53.641 }, 00:22:53.641 { 00:22:53.641 "method": "sock_impl_set_options", 00:22:53.641 "params": { 00:22:53.641 "impl_name": "posix", 00:22:53.641 "recv_buf_size": 2097152, 00:22:53.641 "send_buf_size": 2097152, 
00:22:53.641 "enable_recv_pipe": true, 00:22:53.641 "enable_quickack": false, 00:22:53.641 "enable_placement_id": 0, 00:22:53.641 "enable_zerocopy_send_server": true, 00:22:53.641 "enable_zerocopy_send_client": false, 00:22:53.641 "zerocopy_threshold": 0, 00:22:53.641 "tls_version": 0, 00:22:53.641 "enable_ktls": false 00:22:53.641 } 00:22:53.641 } 00:22:53.641 ] 00:22:53.641 }, 00:22:53.641 { 00:22:53.641 "subsystem": "vmd", 00:22:53.641 "config": [] 00:22:53.641 }, 00:22:53.641 { 00:22:53.641 "subsystem": "accel", 00:22:53.641 "config": [ 00:22:53.641 { 00:22:53.641 "method": "accel_set_options", 00:22:53.641 "params": { 00:22:53.641 "small_cache_size": 128, 00:22:53.641 "large_cache_size": 16, 00:22:53.641 "task_count": 2048, 00:22:53.641 "sequence_count": 2048, 00:22:53.641 "buf_count": 2048 00:22:53.641 } 00:22:53.641 } 00:22:53.641 ] 00:22:53.641 }, 00:22:53.641 { 00:22:53.641 "subsystem": "bdev", 00:22:53.641 "config": [ 00:22:53.641 { 00:22:53.641 "method": "bdev_set_options", 00:22:53.641 "params": { 00:22:53.641 "bdev_io_pool_size": 65535, 00:22:53.641 "bdev_io_cache_size": 256, 00:22:53.641 "bdev_auto_examine": true, 00:22:53.641 "iobuf_small_cache_size": 128, 00:22:53.641 "iobuf_large_cache_size": 16 00:22:53.641 } 00:22:53.641 }, 00:22:53.641 { 00:22:53.641 "method": "bdev_raid_set_options", 00:22:53.641 "params": { 00:22:53.641 "process_window_size_kb": 1024 00:22:53.641 } 00:22:53.641 }, 00:22:53.641 { 00:22:53.641 "method": "bdev_iscsi_set_options", 00:22:53.641 "params": { 00:22:53.641 "timeout_sec": 30 00:22:53.641 } 00:22:53.641 }, 00:22:53.641 { 00:22:53.641 "method": "bdev_nvme_set_options", 00:22:53.641 "params": { 00:22:53.641 "action_on_timeout": "none", 00:22:53.641 "timeout_us": 0, 00:22:53.641 "timeout_admin_us": 0, 00:22:53.641 "keep_alive_timeout_ms": 10000, 00:22:53.641 "arbitration_burst": 0, 00:22:53.641 "low_priority_weight": 0, 00:22:53.641 "medium_priority_weight": 0, 00:22:53.641 "high_priority_weight": 0, 00:22:53.641 "nvme_adminq_poll_period_us": 10000, 00:22:53.641 "nvme_ioq_poll_period_us": 0, 00:22:53.641 "io_queue_requests": 0, 00:22:53.641 "delay_cmd_submit": true, 00:22:53.641 "transport_retry_count": 4, 00:22:53.641 "bdev_retry_count": 3, 00:22:53.641 "transport_ack_timeout": 0, 00:22:53.641 "ctrlr_loss_timeout_sec": 0, 00:22:53.641 "reconnect_delay_sec": 0, 00:22:53.641 "fast_io_fail_timeout_sec": 0, 00:22:53.641 "disable_auto_failback": false, 00:22:53.641 "generate_uuids": false, 00:22:53.641 "transport_tos": 0, 00:22:53.642 "nvme_error_stat": false, 00:22:53.642 "rdma_srq_size": 0, 00:22:53.642 "io_path_stat": false, 00:22:53.642 "allow_accel_sequence": false, 00:22:53.642 "rdma_max_cq_size": 0, 00:22:53.642 "rdma_cm_event_timeout_ms": 0, 00:22:53.642 "dhchap_digests": [ 00:22:53.642 "sha256", 00:22:53.642 "sha384", 00:22:53.642 "sha512" 00:22:53.642 ], 00:22:53.642 "dhchap_dhgroups": [ 00:22:53.642 "null", 00:22:53.642 "ffdhe2048", 00:22:53.642 "ffdhe3072", 00:22:53.642 "ffdhe4096", 00:22:53.642 "ffdhe6144", 00:22:53.642 "ffdhe8192" 00:22:53.642 ] 00:22:53.642 } 00:22:53.642 }, 00:22:53.642 { 00:22:53.642 "method": "bdev_nvme_set_hotplug", 00:22:53.642 "params": { 00:22:53.642 "period_us": 100000, 00:22:53.642 "enable": false 00:22:53.642 } 00:22:53.642 }, 00:22:53.642 { 00:22:53.642 "method": "bdev_malloc_create", 00:22:53.642 "params": { 00:22:53.642 "name": "malloc0", 00:22:53.642 "num_blocks": 8192, 00:22:53.642 "block_size": 4096, 00:22:53.642 "physical_block_size": 4096, 00:22:53.642 "uuid": "4de2d862-2ddc-47db-8121-24eed2bb0832", 
00:22:53.642 "optimal_io_boundary": 0 00:22:53.642 } 00:22:53.642 }, 00:22:53.642 { 00:22:53.642 "method": "bdev_wait_for_examine" 00:22:53.642 } 00:22:53.642 ] 00:22:53.642 }, 00:22:53.642 { 00:22:53.642 "subsystem": "nbd", 00:22:53.642 "config": [] 00:22:53.642 }, 00:22:53.642 { 00:22:53.642 "subsystem": "scheduler", 00:22:53.642 "config": [ 00:22:53.642 { 00:22:53.642 "method": "framework_set_scheduler", 00:22:53.642 "params": { 00:22:53.642 "name": "static" 00:22:53.642 } 00:22:53.642 } 00:22:53.642 ] 00:22:53.642 }, 00:22:53.642 { 00:22:53.642 "subsystem": "nvmf", 00:22:53.642 "config": [ 00:22:53.642 { 00:22:53.642 "method": "nvmf_set_config", 00:22:53.642 "params": { 00:22:53.642 "discovery_filter": "match_any", 00:22:53.642 "admin_cmd_passthru": { 00:22:53.642 "identify_ctrlr": false 00:22:53.642 } 00:22:53.642 } 00:22:53.642 }, 00:22:53.642 { 00:22:53.642 "method": "nvmf_set_max_subsystems", 00:22:53.642 "params": { 00:22:53.642 "max_subsystems": 1024 00:22:53.642 } 00:22:53.642 }, 00:22:53.642 { 00:22:53.642 "method": "nvmf_set_crdt", 00:22:53.642 "params": { 00:22:53.642 "crdt1": 0, 00:22:53.642 "crdt2": 0, 00:22:53.642 "crdt3": 0 00:22:53.642 } 00:22:53.642 }, 00:22:53.642 { 00:22:53.642 "method": "nvmf_create_transport", 00:22:53.642 "params": { 00:22:53.642 "trtype": "TCP", 00:22:53.642 "max_queue_depth": 128, 00:22:53.642 "max_io_qpairs_per_ctrlr": 127, 00:22:53.642 "in_capsule_data_size": 4096, 00:22:53.642 "max_io_size": 131072, 00:22:53.642 "io_unit_size": 131072, 00:22:53.642 "max_aq_depth": 128, 00:22:53.642 "num_shared_buffers": 511, 00:22:53.642 "buf_cache_size": 4294967295, 00:22:53.642 "dif_insert_or_strip": false, 00:22:53.642 "zcopy": false, 00:22:53.642 "c2h_success": false, 00:22:53.642 "sock_priority": 0, 00:22:53.642 "abort_timeout_sec": 1, 00:22:53.642 "ack_timeout": 0, 00:22:53.642 "data_wr_pool_size": 0 00:22:53.642 } 00:22:53.642 }, 00:22:53.642 { 00:22:53.642 "method": "nvmf_create_subsystem", 00:22:53.642 "params": { 00:22:53.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.642 "allow_any_host": false, 00:22:53.642 "serial_number": "SPDK00000000000001", 00:22:53.642 "model_number": "SPDK bdev Controller", 00:22:53.642 "max_namespaces": 10, 00:22:53.642 "min_cntlid": 1, 00:22:53.642 "max_cntlid": 65519, 00:22:53.642 "ana_reporting": false 00:22:53.642 } 00:22:53.642 }, 00:22:53.642 { 00:22:53.642 "method": "nvmf_subsystem_add_host", 00:22:53.642 "params": { 00:22:53.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.642 "host": "nqn.2016-06.io.spdk:host1", 00:22:53.642 "psk": "/tmp/tmp.cnXFtFLDbA" 00:22:53.642 } 00:22:53.642 }, 00:22:53.642 { 00:22:53.642 "method": "nvmf_subsystem_add_ns", 00:22:53.642 "params": { 00:22:53.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.642 "namespace": { 00:22:53.642 "nsid": 1, 00:22:53.642 "bdev_name": "malloc0", 00:22:53.642 "nguid": "4DE2D8622DDC47DB812124EED2BB0832", 00:22:53.642 "uuid": "4de2d862-2ddc-47db-8121-24eed2bb0832", 00:22:53.642 "no_auto_visible": false 00:22:53.642 } 00:22:53.642 } 00:22:53.642 }, 00:22:53.642 { 00:22:53.642 "method": "nvmf_subsystem_add_listener", 00:22:53.642 "params": { 00:22:53.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.642 "listen_address": { 00:22:53.642 "trtype": "TCP", 00:22:53.642 "adrfam": "IPv4", 00:22:53.642 "traddr": "10.0.0.2", 00:22:53.642 "trsvcid": "4420" 00:22:53.642 }, 00:22:53.642 "secure_channel": true 00:22:53.642 } 00:22:53.642 } 00:22:53.642 ] 00:22:53.642 } 00:22:53.642 ] 00:22:53.642 }' 00:22:53.642 17:58:28 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:53.901 17:58:28 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:53.901 "subsystems": [ 00:22:53.901 { 00:22:53.901 "subsystem": "keyring", 00:22:53.901 "config": [] 00:22:53.901 }, 00:22:53.901 { 00:22:53.901 "subsystem": "iobuf", 00:22:53.901 "config": [ 00:22:53.901 { 00:22:53.901 "method": "iobuf_set_options", 00:22:53.901 "params": { 00:22:53.901 "small_pool_count": 8192, 00:22:53.901 "large_pool_count": 1024, 00:22:53.901 "small_bufsize": 8192, 00:22:53.901 "large_bufsize": 135168 00:22:53.901 } 00:22:53.901 } 00:22:53.901 ] 00:22:53.901 }, 00:22:53.901 { 00:22:53.901 "subsystem": "sock", 00:22:53.901 "config": [ 00:22:53.901 { 00:22:53.901 "method": "sock_set_default_impl", 00:22:53.901 "params": { 00:22:53.901 "impl_name": "posix" 00:22:53.901 } 00:22:53.901 }, 00:22:53.901 { 00:22:53.901 "method": "sock_impl_set_options", 00:22:53.901 "params": { 00:22:53.901 "impl_name": "ssl", 00:22:53.901 "recv_buf_size": 4096, 00:22:53.901 "send_buf_size": 4096, 00:22:53.901 "enable_recv_pipe": true, 00:22:53.901 "enable_quickack": false, 00:22:53.901 "enable_placement_id": 0, 00:22:53.901 "enable_zerocopy_send_server": true, 00:22:53.901 "enable_zerocopy_send_client": false, 00:22:53.901 "zerocopy_threshold": 0, 00:22:53.901 "tls_version": 0, 00:22:53.901 "enable_ktls": false 00:22:53.901 } 00:22:53.901 }, 00:22:53.901 { 00:22:53.901 "method": "sock_impl_set_options", 00:22:53.901 "params": { 00:22:53.901 "impl_name": "posix", 00:22:53.901 "recv_buf_size": 2097152, 00:22:53.901 "send_buf_size": 2097152, 00:22:53.901 "enable_recv_pipe": true, 00:22:53.901 "enable_quickack": false, 00:22:53.901 "enable_placement_id": 0, 00:22:53.901 "enable_zerocopy_send_server": true, 00:22:53.901 "enable_zerocopy_send_client": false, 00:22:53.901 "zerocopy_threshold": 0, 00:22:53.901 "tls_version": 0, 00:22:53.901 "enable_ktls": false 00:22:53.901 } 00:22:53.901 } 00:22:53.901 ] 00:22:53.901 }, 00:22:53.901 { 00:22:53.901 "subsystem": "vmd", 00:22:53.901 "config": [] 00:22:53.901 }, 00:22:53.901 { 00:22:53.901 "subsystem": "accel", 00:22:53.901 "config": [ 00:22:53.901 { 00:22:53.901 "method": "accel_set_options", 00:22:53.901 "params": { 00:22:53.901 "small_cache_size": 128, 00:22:53.901 "large_cache_size": 16, 00:22:53.901 "task_count": 2048, 00:22:53.901 "sequence_count": 2048, 00:22:53.901 "buf_count": 2048 00:22:53.901 } 00:22:53.901 } 00:22:53.901 ] 00:22:53.901 }, 00:22:53.901 { 00:22:53.901 "subsystem": "bdev", 00:22:53.901 "config": [ 00:22:53.901 { 00:22:53.901 "method": "bdev_set_options", 00:22:53.901 "params": { 00:22:53.901 "bdev_io_pool_size": 65535, 00:22:53.901 "bdev_io_cache_size": 256, 00:22:53.901 "bdev_auto_examine": true, 00:22:53.901 "iobuf_small_cache_size": 128, 00:22:53.901 "iobuf_large_cache_size": 16 00:22:53.901 } 00:22:53.901 }, 00:22:53.901 { 00:22:53.901 "method": "bdev_raid_set_options", 00:22:53.901 "params": { 00:22:53.901 "process_window_size_kb": 1024 00:22:53.901 } 00:22:53.901 }, 00:22:53.901 { 00:22:53.901 "method": "bdev_iscsi_set_options", 00:22:53.901 "params": { 00:22:53.901 "timeout_sec": 30 00:22:53.901 } 00:22:53.901 }, 00:22:53.901 { 00:22:53.901 "method": "bdev_nvme_set_options", 00:22:53.901 "params": { 00:22:53.901 "action_on_timeout": "none", 00:22:53.901 "timeout_us": 0, 00:22:53.901 "timeout_admin_us": 0, 00:22:53.901 "keep_alive_timeout_ms": 10000, 00:22:53.901 "arbitration_burst": 0, 00:22:53.901 "low_priority_weight": 0, 
00:22:53.901 "medium_priority_weight": 0, 00:22:53.901 "high_priority_weight": 0, 00:22:53.901 "nvme_adminq_poll_period_us": 10000, 00:22:53.901 "nvme_ioq_poll_period_us": 0, 00:22:53.901 "io_queue_requests": 512, 00:22:53.901 "delay_cmd_submit": true, 00:22:53.901 "transport_retry_count": 4, 00:22:53.901 "bdev_retry_count": 3, 00:22:53.901 "transport_ack_timeout": 0, 00:22:53.901 "ctrlr_loss_timeout_sec": 0, 00:22:53.901 "reconnect_delay_sec": 0, 00:22:53.901 "fast_io_fail_timeout_sec": 0, 00:22:53.901 "disable_auto_failback": false, 00:22:53.901 "generate_uuids": false, 00:22:53.901 "transport_tos": 0, 00:22:53.901 "nvme_error_stat": false, 00:22:53.901 "rdma_srq_size": 0, 00:22:53.901 "io_path_stat": false, 00:22:53.901 "allow_accel_sequence": false, 00:22:53.901 "rdma_max_cq_size": 0, 00:22:53.901 "rdma_cm_event_timeout_ms": 0, 00:22:53.901 "dhchap_digests": [ 00:22:53.901 "sha256", 00:22:53.901 "sha384", 00:22:53.901 "sha512" 00:22:53.901 ], 00:22:53.901 "dhchap_dhgroups": [ 00:22:53.901 "null", 00:22:53.901 "ffdhe2048", 00:22:53.901 "ffdhe3072", 00:22:53.901 "ffdhe4096", 00:22:53.901 "ffdhe6144", 00:22:53.901 "ffdhe8192" 00:22:53.901 ] 00:22:53.901 } 00:22:53.901 }, 00:22:53.901 { 00:22:53.901 "method": "bdev_nvme_attach_controller", 00:22:53.901 "params": { 00:22:53.901 "name": "TLSTEST", 00:22:53.901 "trtype": "TCP", 00:22:53.901 "adrfam": "IPv4", 00:22:53.901 "traddr": "10.0.0.2", 00:22:53.901 "trsvcid": "4420", 00:22:53.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.901 "prchk_reftag": false, 00:22:53.901 "prchk_guard": false, 00:22:53.901 "ctrlr_loss_timeout_sec": 0, 00:22:53.901 "reconnect_delay_sec": 0, 00:22:53.901 "fast_io_fail_timeout_sec": 0, 00:22:53.901 "psk": "/tmp/tmp.cnXFtFLDbA", 00:22:53.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.901 "hdgst": false, 00:22:53.901 "ddgst": false 00:22:53.901 } 00:22:53.901 }, 00:22:53.901 { 00:22:53.901 "method": "bdev_nvme_set_hotplug", 00:22:53.901 "params": { 00:22:53.901 "period_us": 100000, 00:22:53.901 "enable": false 00:22:53.901 } 00:22:53.901 }, 00:22:53.901 { 00:22:53.901 "method": "bdev_wait_for_examine" 00:22:53.901 } 00:22:53.901 ] 00:22:53.901 }, 00:22:53.901 { 00:22:53.901 "subsystem": "nbd", 00:22:53.901 "config": [] 00:22:53.901 } 00:22:53.901 ] 00:22:53.901 }' 00:22:53.901 17:58:28 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 988416 00:22:53.901 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 988416 ']' 00:22:53.901 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 988416 00:22:53.901 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:53.901 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:53.901 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 988416 00:22:53.901 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:53.901 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:53.901 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 988416' 00:22:53.901 killing process with pid 988416 00:22:53.901 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 988416 00:22:53.901 Received shutdown signal, test time was about 10.000000 seconds 00:22:53.901 00:22:53.901 Latency(us) 00:22:53.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.901 
=================================================================================================================== 00:22:53.901 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:53.901 [2024-07-20 17:58:28.500526] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:53.901 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 988416 00:22:54.159 17:58:28 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 988215 00:22:54.159 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 988215 ']' 00:22:54.159 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 988215 00:22:54.159 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:54.159 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:54.159 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 988215 00:22:54.159 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:54.159 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:54.159 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 988215' 00:22:54.159 killing process with pid 988215 00:22:54.159 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 988215 00:22:54.159 [2024-07-20 17:58:28.730146] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:54.159 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 988215 00:22:54.418 17:58:28 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:54.418 17:58:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:54.418 17:58:28 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:54.418 "subsystems": [ 00:22:54.418 { 00:22:54.418 "subsystem": "keyring", 00:22:54.418 "config": [] 00:22:54.418 }, 00:22:54.418 { 00:22:54.418 "subsystem": "iobuf", 00:22:54.418 "config": [ 00:22:54.418 { 00:22:54.418 "method": "iobuf_set_options", 00:22:54.418 "params": { 00:22:54.418 "small_pool_count": 8192, 00:22:54.418 "large_pool_count": 1024, 00:22:54.418 "small_bufsize": 8192, 00:22:54.418 "large_bufsize": 135168 00:22:54.418 } 00:22:54.418 } 00:22:54.418 ] 00:22:54.418 }, 00:22:54.418 { 00:22:54.418 "subsystem": "sock", 00:22:54.418 "config": [ 00:22:54.418 { 00:22:54.418 "method": "sock_set_default_impl", 00:22:54.418 "params": { 00:22:54.418 "impl_name": "posix" 00:22:54.418 } 00:22:54.418 }, 00:22:54.418 { 00:22:54.418 "method": "sock_impl_set_options", 00:22:54.418 "params": { 00:22:54.418 "impl_name": "ssl", 00:22:54.418 "recv_buf_size": 4096, 00:22:54.418 "send_buf_size": 4096, 00:22:54.418 "enable_recv_pipe": true, 00:22:54.418 "enable_quickack": false, 00:22:54.418 "enable_placement_id": 0, 00:22:54.418 "enable_zerocopy_send_server": true, 00:22:54.418 "enable_zerocopy_send_client": false, 00:22:54.418 "zerocopy_threshold": 0, 00:22:54.418 "tls_version": 0, 00:22:54.418 "enable_ktls": false 00:22:54.418 } 00:22:54.418 }, 00:22:54.418 { 00:22:54.418 "method": "sock_impl_set_options", 00:22:54.418 "params": { 00:22:54.418 "impl_name": "posix", 00:22:54.418 "recv_buf_size": 2097152, 00:22:54.418 "send_buf_size": 2097152, 00:22:54.418 "enable_recv_pipe": true, 00:22:54.418 
"enable_quickack": false, 00:22:54.418 "enable_placement_id": 0, 00:22:54.418 "enable_zerocopy_send_server": true, 00:22:54.418 "enable_zerocopy_send_client": false, 00:22:54.418 "zerocopy_threshold": 0, 00:22:54.418 "tls_version": 0, 00:22:54.418 "enable_ktls": false 00:22:54.418 } 00:22:54.418 } 00:22:54.418 ] 00:22:54.418 }, 00:22:54.418 { 00:22:54.418 "subsystem": "vmd", 00:22:54.418 "config": [] 00:22:54.418 }, 00:22:54.418 { 00:22:54.418 "subsystem": "accel", 00:22:54.418 "config": [ 00:22:54.418 { 00:22:54.418 "method": "accel_set_options", 00:22:54.418 "params": { 00:22:54.418 "small_cache_size": 128, 00:22:54.418 "large_cache_size": 16, 00:22:54.418 "task_count": 2048, 00:22:54.418 "sequence_count": 2048, 00:22:54.418 "buf_count": 2048 00:22:54.418 } 00:22:54.418 } 00:22:54.418 ] 00:22:54.418 }, 00:22:54.418 { 00:22:54.418 "subsystem": "bdev", 00:22:54.418 "config": [ 00:22:54.418 { 00:22:54.418 "method": "bdev_set_options", 00:22:54.418 "params": { 00:22:54.418 "bdev_io_pool_size": 65535, 00:22:54.418 "bdev_io_cache_size": 256, 00:22:54.418 "bdev_auto_examine": true, 00:22:54.418 "iobuf_small_cache_size": 128, 00:22:54.418 "iobuf_large_cache_size": 16 00:22:54.418 } 00:22:54.418 }, 00:22:54.418 { 00:22:54.418 "method": "bdev_raid_set_options", 00:22:54.418 "params": { 00:22:54.418 "process_window_size_kb": 1024 00:22:54.418 } 00:22:54.418 }, 00:22:54.418 { 00:22:54.418 "method": "bdev_iscsi_set_options", 00:22:54.418 "params": { 00:22:54.419 "timeout_sec": 30 00:22:54.419 } 00:22:54.419 }, 00:22:54.419 { 00:22:54.419 "method": "bdev_nvme_set_options", 00:22:54.419 "params": { 00:22:54.419 "action_on_timeout": "none", 00:22:54.419 "timeout_us": 0, 00:22:54.419 "timeout_admin_us": 0, 00:22:54.419 "keep_alive_timeout_ms": 10000, 00:22:54.419 "arbitration_burst": 0, 00:22:54.419 "low_priority_weight": 0, 00:22:54.419 "medium_priority_weight": 0, 00:22:54.419 "high_priority_weight": 0, 00:22:54.419 "nvme_adminq_poll_period_us": 10000, 00:22:54.419 "nvme_ioq_poll_period_us": 0, 00:22:54.419 "io_queue_requests": 0, 00:22:54.419 "delay_cmd_submit": true, 00:22:54.419 "transport_retry_count": 4, 00:22:54.419 "bdev_retry_count": 3, 00:22:54.419 "transport_ack_timeout": 0, 00:22:54.419 "ctrlr_loss_timeout_sec": 0, 00:22:54.419 "reconnect_delay_sec": 0, 00:22:54.419 "fast_io_fail_timeout_sec": 0, 00:22:54.419 "disable_auto_failback": false, 00:22:54.419 "generate_uuids": false, 00:22:54.419 "transport_tos": 0, 00:22:54.419 "nvme_error_stat": false, 00:22:54.419 "rdma_srq_size": 0, 00:22:54.419 "io_path_stat": false, 00:22:54.419 "allow_accel_sequence": false, 00:22:54.419 "rdma_max_cq_size": 0, 00:22:54.419 "rdma_cm_event_timeout_ms": 0, 00:22:54.419 "dhchap_digests": [ 00:22:54.419 "sha256", 00:22:54.419 "sha384", 00:22:54.419 "sha512" 00:22:54.419 ], 00:22:54.419 "dhchap_dhgroups": [ 00:22:54.419 "null", 00:22:54.419 "ffdhe2048", 00:22:54.419 "ffdhe3072", 00:22:54.419 "ffdhe4096", 00:22:54.419 "ffdhe6144", 00:22:54.419 "ffdhe8192" 00:22:54.419 ] 00:22:54.419 } 00:22:54.419 }, 00:22:54.419 { 00:22:54.419 "method": "bdev_nvme_set_hotplug", 00:22:54.419 "params": { 00:22:54.419 "period_us": 100000, 00:22:54.419 "enable": false 00:22:54.419 } 00:22:54.419 }, 00:22:54.419 { 00:22:54.419 "method": "bdev_malloc_create", 00:22:54.419 "params": { 00:22:54.419 "name": "malloc0", 00:22:54.419 "num_blocks": 8192, 00:22:54.419 "block_size": 4096, 00:22:54.419 "physical_block_size": 4096, 00:22:54.419 "uuid": "4de2d862-2ddc-47db-8121-24eed2bb0832", 00:22:54.419 "optimal_io_boundary": 0 00:22:54.419 } 
00:22:54.419 }, 00:22:54.419 { 00:22:54.419 "method": "bdev_wait_for_examine" 00:22:54.419 } 00:22:54.419 ] 00:22:54.419 }, 00:22:54.419 { 00:22:54.419 "subsystem": "nbd", 00:22:54.419 "config": [] 00:22:54.419 }, 00:22:54.419 { 00:22:54.419 "subsystem": "scheduler", 00:22:54.419 "config": [ 00:22:54.419 { 00:22:54.419 "method": "framework_set_scheduler", 00:22:54.419 "params": { 00:22:54.419 "name": "static" 00:22:54.419 } 00:22:54.419 } 00:22:54.419 ] 00:22:54.419 }, 00:22:54.419 { 00:22:54.419 "subsystem": "nvmf", 00:22:54.419 "config": [ 00:22:54.419 { 00:22:54.419 "method": "nvmf_set_config", 00:22:54.419 "params": { 00:22:54.419 "discovery_filter": "match_any", 00:22:54.419 "admin_cmd_passthru": { 00:22:54.419 "identify_ctrlr": false 00:22:54.419 } 00:22:54.419 } 00:22:54.419 }, 00:22:54.419 { 00:22:54.419 "method": "nvmf_set_max_subsystems", 00:22:54.419 "params": { 00:22:54.419 "max_subsystems": 1024 00:22:54.419 } 00:22:54.419 }, 00:22:54.419 { 00:22:54.419 "method": "nvmf_set_crdt", 00:22:54.419 "params": { 00:22:54.419 "crdt1": 0, 00:22:54.419 "crdt2": 0, 00:22:54.419 "crdt3": 0 00:22:54.419 } 00:22:54.419 }, 00:22:54.419 { 00:22:54.419 "method": "nvmf_create_transport", 00:22:54.419 "params": { 00:22:54.419 "trtype": "TCP", 00:22:54.419 "max_queue_depth": 128, 00:22:54.419 "max_io_qpairs_per_ctrlr": 127, 00:22:54.419 "in_capsule_data_size": 4096, 00:22:54.419 "max_io_size": 131072, 00:22:54.419 "io_unit_size": 131072, 00:22:54.419 "max_aq_depth": 128, 00:22:54.419 "num_shared_buffers": 511, 00:22:54.419 "buf_cache_size": 4294967295, 00:22:54.419 "dif_insert_or_strip": false, 00:22:54.419 "zcopy": false, 00:22:54.419 "c2h_success": false, 00:22:54.419 "sock_priority": 0, 00:22:54.419 "abort_timeout_sec": 1, 00:22:54.419 "ack_timeout": 0, 00:22:54.419 "data_wr_pool_size": 0 00:22:54.419 } 00:22:54.419 }, 00:22:54.419 { 00:22:54.419 "method": "nvmf_create_subsystem", 00:22:54.419 "params": { 00:22:54.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.419 "allow_any_host": false, 00:22:54.419 "serial_number": "SPDK00000000000001", 00:22:54.419 "model_number": "SPDK bdev Controller", 00:22:54.419 "max_namespaces": 10, 00:22:54.419 "min_cntlid": 1, 00:22:54.419 "max_cntlid": 65519, 00:22:54.419 "ana_reporting": false 00:22:54.419 } 00:22:54.419 }, 00:22:54.419 { 00:22:54.419 "method": "nvmf_subsystem_add_host", 00:22:54.419 "params": { 00:22:54.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.419 "host": "nqn.2016-06.io.spdk:host1", 00:22:54.419 "psk": "/tmp/tmp.cnXFtFLDbA" 00:22:54.419 } 00:22:54.419 }, 00:22:54.419 { 00:22:54.419 "method": "nvmf_subsystem_add_ns", 00:22:54.419 "params": { 00:22:54.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.419 "namespace": { 00:22:54.419 "nsid": 1, 00:22:54.419 "bdev_name": "malloc0", 00:22:54.419 "nguid": "4DE2D8622DDC47DB812124EED2BB0832", 00:22:54.419 "uuid": "4de2d862-2ddc-47db-8121-24eed2bb0832", 00:22:54.419 "no_auto_visible": false 00:22:54.419 } 00:22:54.419 } 00:22:54.419 }, 00:22:54.419 { 00:22:54.419 "method": "nvmf_subsystem_add_listener", 00:22:54.419 "params": { 00:22:54.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.419 "listen_address": { 00:22:54.419 "trtype": "TCP", 00:22:54.419 "adrfam": "IPv4", 00:22:54.419 "traddr": "10.0.0.2", 00:22:54.419 "trsvcid": "4420" 00:22:54.419 }, 00:22:54.419 "secure_channel": true 00:22:54.419 } 00:22:54.419 } 00:22:54.419 ] 00:22:54.419 } 00:22:54.419 ] 00:22:54.419 }' 00:22:54.419 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:54.419 17:58:28 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.419 17:58:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=988683 00:22:54.419 17:58:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:54.419 17:58:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 988683 00:22:54.419 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 988683 ']' 00:22:54.419 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.419 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:54.419 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.419 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:54.419 17:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.419 [2024-07-20 17:58:29.008143] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:22:54.419 [2024-07-20 17:58:29.008233] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.419 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.419 [2024-07-20 17:58:29.075710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.419 [2024-07-20 17:58:29.163899] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.419 [2024-07-20 17:58:29.163963] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.419 [2024-07-20 17:58:29.163989] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.419 [2024-07-20 17:58:29.164004] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.419 [2024-07-20 17:58:29.164016] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
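The two large JSON blobs above are save_config dumps: tgtconf taken from the target's default RPC socket (@196) and bdevperfconf taken from /var/tmp/bdevperf.sock (@197). At @203 the test restarts nvmf_tgt and feeds tgtconf straight back in as its startup configuration; the /dev/fd/62 in the command line is bash process substitution around the echoed JSON. A condensed equivalent, with the absolute Jenkins paths shortened to a local SPDK tree:

  tgtconf=$(scripts/rpc.py save_config)
      # snapshot of the transport/subsystem/listener/PSK state built so far
  build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &
      # -c accepts a JSON config file; <() is what appears as /dev/fd/62 above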
00:22:54.419 [2024-07-20 17:58:29.164109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.677 [2024-07-20 17:58:29.396591] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.677 [2024-07-20 17:58:29.412542] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:54.677 [2024-07-20 17:58:29.428590] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:54.677 [2024-07-20 17:58:29.443021] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.244 17:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:55.244 17:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:55.244 17:58:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:55.244 17:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:55.244 17:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.244 17:58:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.244 17:58:29 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=988841 00:22:55.244 17:58:29 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 988841 /var/tmp/bdevperf.sock 00:22:55.244 17:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 988841 ']' 00:22:55.244 17:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.244 17:58:29 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:55.244 17:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:55.244 17:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
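bdevperf at @204 is started the same way: idle (-z), on its own RPC socket, with the bdevperfconf JSON (echoed just below) arriving as /dev/fd/63, and the I/O phase only triggered later by bdevperf.py at @211. A condensed sketch under the same path assumptions as above:

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
      # the embedded bdev_nvme_attach_controller entry carries the PSK, so the
      # TLS connection to 10.0.0.2:4420 is established during startup
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
      # runs the configured verify workload and prints the Latency table seen below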
00:22:55.244 17:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:55.244 17:58:29 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:55.244 "subsystems": [ 00:22:55.244 { 00:22:55.244 "subsystem": "keyring", 00:22:55.244 "config": [] 00:22:55.244 }, 00:22:55.244 { 00:22:55.244 "subsystem": "iobuf", 00:22:55.244 "config": [ 00:22:55.244 { 00:22:55.244 "method": "iobuf_set_options", 00:22:55.244 "params": { 00:22:55.244 "small_pool_count": 8192, 00:22:55.244 "large_pool_count": 1024, 00:22:55.244 "small_bufsize": 8192, 00:22:55.244 "large_bufsize": 135168 00:22:55.244 } 00:22:55.244 } 00:22:55.244 ] 00:22:55.244 }, 00:22:55.244 { 00:22:55.244 "subsystem": "sock", 00:22:55.244 "config": [ 00:22:55.244 { 00:22:55.244 "method": "sock_set_default_impl", 00:22:55.244 "params": { 00:22:55.244 "impl_name": "posix" 00:22:55.244 } 00:22:55.244 }, 00:22:55.244 { 00:22:55.244 "method": "sock_impl_set_options", 00:22:55.244 "params": { 00:22:55.244 "impl_name": "ssl", 00:22:55.244 "recv_buf_size": 4096, 00:22:55.244 "send_buf_size": 4096, 00:22:55.244 "enable_recv_pipe": true, 00:22:55.244 "enable_quickack": false, 00:22:55.244 "enable_placement_id": 0, 00:22:55.244 "enable_zerocopy_send_server": true, 00:22:55.244 "enable_zerocopy_send_client": false, 00:22:55.244 "zerocopy_threshold": 0, 00:22:55.244 "tls_version": 0, 00:22:55.244 "enable_ktls": false 00:22:55.244 } 00:22:55.244 }, 00:22:55.244 { 00:22:55.244 "method": "sock_impl_set_options", 00:22:55.244 "params": { 00:22:55.244 "impl_name": "posix", 00:22:55.244 "recv_buf_size": 2097152, 00:22:55.244 "send_buf_size": 2097152, 00:22:55.244 "enable_recv_pipe": true, 00:22:55.244 "enable_quickack": false, 00:22:55.244 "enable_placement_id": 0, 00:22:55.244 "enable_zerocopy_send_server": true, 00:22:55.244 "enable_zerocopy_send_client": false, 00:22:55.244 "zerocopy_threshold": 0, 00:22:55.244 "tls_version": 0, 00:22:55.244 "enable_ktls": false 00:22:55.244 } 00:22:55.244 } 00:22:55.244 ] 00:22:55.244 }, 00:22:55.244 { 00:22:55.244 "subsystem": "vmd", 00:22:55.244 "config": [] 00:22:55.244 }, 00:22:55.244 { 00:22:55.244 "subsystem": "accel", 00:22:55.244 "config": [ 00:22:55.244 { 00:22:55.244 "method": "accel_set_options", 00:22:55.244 "params": { 00:22:55.244 "small_cache_size": 128, 00:22:55.244 "large_cache_size": 16, 00:22:55.244 "task_count": 2048, 00:22:55.244 "sequence_count": 2048, 00:22:55.244 "buf_count": 2048 00:22:55.244 } 00:22:55.244 } 00:22:55.244 ] 00:22:55.244 }, 00:22:55.244 { 00:22:55.244 "subsystem": "bdev", 00:22:55.244 "config": [ 00:22:55.244 { 00:22:55.244 "method": "bdev_set_options", 00:22:55.244 "params": { 00:22:55.244 "bdev_io_pool_size": 65535, 00:22:55.244 "bdev_io_cache_size": 256, 00:22:55.244 "bdev_auto_examine": true, 00:22:55.244 "iobuf_small_cache_size": 128, 00:22:55.244 "iobuf_large_cache_size": 16 00:22:55.244 } 00:22:55.244 }, 00:22:55.244 { 00:22:55.244 "method": "bdev_raid_set_options", 00:22:55.244 "params": { 00:22:55.244 "process_window_size_kb": 1024 00:22:55.244 } 00:22:55.244 }, 00:22:55.244 { 00:22:55.244 "method": "bdev_iscsi_set_options", 00:22:55.244 "params": { 00:22:55.244 "timeout_sec": 30 00:22:55.244 } 00:22:55.244 }, 00:22:55.244 { 00:22:55.244 "method": "bdev_nvme_set_options", 00:22:55.244 "params": { 00:22:55.244 "action_on_timeout": "none", 00:22:55.244 "timeout_us": 0, 00:22:55.244 "timeout_admin_us": 0, 00:22:55.244 "keep_alive_timeout_ms": 10000, 00:22:55.244 "arbitration_burst": 0, 00:22:55.244 "low_priority_weight": 0, 00:22:55.244 
"medium_priority_weight": 0, 00:22:55.244 "high_priority_weight": 0, 00:22:55.244 "nvme_adminq_poll_period_us": 10000, 00:22:55.244 "nvme_ioq_poll_period_us": 0, 00:22:55.244 "io_queue_requests": 512, 00:22:55.244 "delay_cmd_submit": true, 00:22:55.244 "transport_retry_count": 4, 00:22:55.244 "bdev_retry_count": 3, 00:22:55.244 "transport_ack_timeout": 0, 00:22:55.244 "ctrlr_loss_timeout_sec": 0, 00:22:55.244 "reconnect_delay_sec": 0, 00:22:55.244 "fast_io_fail_timeout_sec": 0, 00:22:55.244 "disable_auto_failback": false, 00:22:55.244 "generate_uuids": false, 00:22:55.244 "transport_tos": 0, 00:22:55.244 "nvme_error_stat": false, 00:22:55.244 "rdma_srq_size": 0, 00:22:55.244 "io_path_stat": false, 00:22:55.244 "allow_accel_sequence": false, 00:22:55.244 "rdma_max_cq_size": 0, 00:22:55.244 "rdma_cm_event_timeout_ms": 0, 00:22:55.244 "dhchap_digests": [ 00:22:55.244 "sha256", 00:22:55.244 "sha384", 00:22:55.244 "sha512" 00:22:55.244 ], 00:22:55.244 "dhchap_dhgroups": [ 00:22:55.244 "null", 00:22:55.244 "ffdhe2048", 00:22:55.244 "ffdhe3072", 00:22:55.245 "ffdhe4096", 00:22:55.245 "ffdhe6144", 00:22:55.245 "ffdhe8192" 00:22:55.245 ] 00:22:55.245 } 00:22:55.245 }, 00:22:55.245 { 00:22:55.245 "method": "bdev_nvme_attach_controller", 00:22:55.245 "params": { 00:22:55.245 "name": "TLSTEST", 00:22:55.245 "trtype": "TCP", 00:22:55.245 "adrfam": "IPv4", 00:22:55.245 "traddr": "10.0.0.2", 00:22:55.245 "trsvcid": "4420", 00:22:55.245 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.245 "prchk_reftag": false, 00:22:55.245 "prchk_guard": false, 00:22:55.245 "ctrlr_loss_timeout_sec": 0, 00:22:55.245 "reconnect_delay_sec": 0, 00:22:55.245 "fast_io_fail_timeout_sec": 0, 00:22:55.245 "psk": "/tmp/tmp.cnXFtFLDbA", 00:22:55.245 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:55.245 "hdgst": false, 00:22:55.245 "ddgst": false 00:22:55.245 } 00:22:55.245 }, 00:22:55.245 { 00:22:55.245 "method": "bdev_nvme_set_hotplug", 00:22:55.245 "params": { 00:22:55.245 "period_us": 100000, 00:22:55.245 "enable": false 00:22:55.245 } 00:22:55.245 }, 00:22:55.245 { 00:22:55.245 "method": "bdev_wait_for_examine" 00:22:55.245 } 00:22:55.245 ] 00:22:55.245 }, 00:22:55.245 { 00:22:55.245 "subsystem": "nbd", 00:22:55.245 "config": [] 00:22:55.245 } 00:22:55.245 ] 00:22:55.245 }' 00:22:55.245 17:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.245 [2024-07-20 17:58:30.035376] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:55.245 [2024-07-20 17:58:30.035512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid988841 ] 00:22:55.503 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.503 [2024-07-20 17:58:30.104549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.503 [2024-07-20 17:58:30.189870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.761 [2024-07-20 17:58:30.346994] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:55.761 [2024-07-20 17:58:30.347132] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:56.327 17:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:56.327 17:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:56.327 17:58:31 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:56.327 Running I/O for 10 seconds... 00:23:08.541 00:23:08.541 Latency(us) 00:23:08.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.541 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:08.541 Verification LBA range: start 0x0 length 0x2000 00:23:08.541 TLSTESTn1 : 10.11 881.75 3.44 0.00 0.00 144514.55 8592.50 211268.65 00:23:08.541 =================================================================================================================== 00:23:08.541 Total : 881.75 3.44 0.00 0.00 144514.55 8592.50 211268.65 00:23:08.541 0 00:23:08.541 17:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:08.541 17:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 988841 00:23:08.541 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 988841 ']' 00:23:08.541 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 988841 00:23:08.541 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:08.541 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:08.541 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 988841 00:23:08.541 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:08.541 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:08.541 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 988841' 00:23:08.541 killing process with pid 988841 00:23:08.541 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 988841 00:23:08.541 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.541 00:23:08.541 Latency(us) 00:23:08.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.541 =================================================================================================================== 00:23:08.541 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.541 [2024-07-20 17:58:41.285200] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in 
v24.09 hit 1 times 00:23:08.541 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 988841 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 988683 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 988683 ']' 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 988683 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 988683 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 988683' 00:23:08.542 killing process with pid 988683 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 988683 00:23:08.542 [2024-07-20 17:58:41.502531] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 988683 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=990163 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 990163 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 990163 ']' 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:08.542 17:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.542 [2024-07-20 17:58:41.795198] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:08.542 [2024-07-20 17:58:41.795299] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.542 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.542 [2024-07-20 17:58:41.863167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.542 [2024-07-20 17:58:41.949908] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:08.542 [2024-07-20 17:58:41.949972] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.542 [2024-07-20 17:58:41.949997] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.542 [2024-07-20 17:58:41.950011] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.542 [2024-07-20 17:58:41.950023] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.542 [2024-07-20 17:58:41.950055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.542 17:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:08.542 17:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:08.542 17:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:08.542 17:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:08.542 17:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.542 17:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.542 17:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.cnXFtFLDbA 00:23:08.542 17:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.cnXFtFLDbA 00:23:08.542 17:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:08.542 [2024-07-20 17:58:42.342327] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.542 17:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:08.542 17:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:08.542 [2024-07-20 17:58:42.923889] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:08.542 [2024-07-20 17:58:42.924153] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.542 17:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:08.542 malloc0 00:23:08.542 17:58:43 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:08.799 17:58:43 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cnXFtFLDbA 00:23:09.055 [2024-07-20 17:58:43.720657] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:09.055 17:58:43 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=990447 00:23:09.055 17:58:43 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:09.055 17:58:43 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:23:09.055 17:58:43 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 990447 /var/tmp/bdevperf.sock 00:23:09.055 17:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 990447 ']' 00:23:09.055 17:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.055 17:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:09.055 17:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.055 17:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:09.055 17:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.055 [2024-07-20 17:58:43.782407] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:09.055 [2024-07-20 17:58:43.782493] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid990447 ] 00:23:09.055 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.055 [2024-07-20 17:58:43.844370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.312 [2024-07-20 17:58:43.934679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.312 17:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:09.312 17:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:09.312 17:58:44 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cnXFtFLDbA 00:23:09.570 17:58:44 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:09.827 [2024-07-20 17:58:44.587533] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.085 nvme0n1 00:23:10.085 17:58:44 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:10.085 Running I/O for 1 seconds... 
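The @227/@228 pair above is the second initiator variant: instead of handing bdev_nvme_attach_controller a PSK file path (the form that produced the spdk_nvme_ctrlr_opts.psk deprecation warnings earlier in this log), the key is first registered in the keyring and then referenced by name. Against the bdevperf RPC socket used here:

  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cnXFtFLDbA
      # wraps the PSK file in a named keyring entry
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
      # --psk now names the keyring entry rather than a file path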
00:23:11.490 00:23:11.490 Latency(us) 00:23:11.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.490 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:11.490 Verification LBA range: start 0x0 length 0x2000 00:23:11.490 nvme0n1 : 1.12 676.05 2.64 0.00 0.00 182419.76 9660.49 203501.42 00:23:11.490 =================================================================================================================== 00:23:11.490 Total : 676.05 2.64 0.00 0.00 182419.76 9660.49 203501.42 00:23:11.490 0 00:23:11.490 17:58:45 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 990447 00:23:11.490 17:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 990447 ']' 00:23:11.490 17:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 990447 00:23:11.490 17:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:11.490 17:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:11.490 17:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 990447 00:23:11.490 17:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:11.490 17:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:11.490 17:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 990447' 00:23:11.490 killing process with pid 990447 00:23:11.490 17:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 990447 00:23:11.491 Received shutdown signal, test time was about 1.000000 seconds 00:23:11.491 00:23:11.491 Latency(us) 00:23:11.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.491 =================================================================================================================== 00:23:11.491 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:11.491 17:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 990447 00:23:11.491 17:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 990163 00:23:11.491 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 990163 ']' 00:23:11.491 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 990163 00:23:11.491 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:11.491 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:11.491 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 990163 00:23:11.491 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:11.491 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:11.491 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 990163' 00:23:11.491 killing process with pid 990163 00:23:11.491 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 990163 00:23:11.491 [2024-07-20 17:58:46.228176] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:11.491 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 990163 00:23:11.748 17:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:11.748 17:58:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.748 17:58:46 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:11.748 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.748 17:58:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=990733 00:23:11.748 17:58:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:11.748 17:58:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 990733 00:23:11.748 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 990733 ']' 00:23:11.748 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.748 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:11.748 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.748 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:11.748 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.748 [2024-07-20 17:58:46.519310] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:11.748 [2024-07-20 17:58:46.519384] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.006 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.006 [2024-07-20 17:58:46.581603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.006 [2024-07-20 17:58:46.664954] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.006 [2024-07-20 17:58:46.665009] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.006 [2024-07-20 17:58:46.665024] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.006 [2024-07-20 17:58:46.665036] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.006 [2024-07-20 17:58:46.665046] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:12.006 [2024-07-20 17:58:46.665088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.006 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:12.006 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:12.006 17:58:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.006 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.006 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.006 17:58:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.006 17:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:12.006 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.006 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.006 [2024-07-20 17:58:46.799402] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.263 malloc0 00:23:12.263 [2024-07-20 17:58:46.830593] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:12.263 [2024-07-20 17:58:46.830900] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.263 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.263 17:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=990873 00:23:12.263 17:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 990873 /var/tmp/bdevperf.sock 00:23:12.263 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 990873 ']' 00:23:12.263 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.263 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:12.263 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:12.263 17:58:46 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:12.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.263 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:12.263 17:58:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.263 [2024-07-20 17:58:46.900415] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:12.263 [2024-07-20 17:58:46.900502] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid990873 ] 00:23:12.263 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.263 [2024-07-20 17:58:46.958863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.263 [2024-07-20 17:58:47.044183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.521 17:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:12.522 17:58:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:12.522 17:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cnXFtFLDbA 00:23:12.779 17:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:13.038 [2024-07-20 17:58:47.667341] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.038 nvme0n1 00:23:13.038 17:58:47 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:13.296 Running I/O for 1 seconds... 00:23:14.228 00:23:14.228 Latency(us) 00:23:14.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.228 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:14.228 Verification LBA range: start 0x0 length 0x2000 00:23:14.228 nvme0n1 : 1.12 655.72 2.56 0.00 0.00 188059.63 8009.96 209715.20 00:23:14.228 =================================================================================================================== 00:23:14.228 Total : 655.72 2.56 0.00 0.00 188059.63 8009.96 209715.20 00:23:14.228 0 00:23:14.228 17:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:14.228 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.228 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.516 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.516 17:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:14.516 "subsystems": [ 00:23:14.516 { 00:23:14.516 "subsystem": "keyring", 00:23:14.516 "config": [ 00:23:14.516 { 00:23:14.516 "method": "keyring_file_add_key", 00:23:14.516 "params": { 00:23:14.516 "name": "key0", 00:23:14.516 "path": "/tmp/tmp.cnXFtFLDbA" 00:23:14.516 } 00:23:14.516 } 00:23:14.516 ] 00:23:14.516 }, 00:23:14.516 { 00:23:14.516 "subsystem": "iobuf", 00:23:14.516 "config": [ 00:23:14.516 { 00:23:14.516 "method": "iobuf_set_options", 00:23:14.516 "params": { 00:23:14.516 "small_pool_count": 8192, 00:23:14.516 "large_pool_count": 1024, 00:23:14.516 "small_bufsize": 8192, 00:23:14.516 "large_bufsize": 135168 00:23:14.516 } 00:23:14.516 } 00:23:14.516 ] 00:23:14.516 }, 00:23:14.516 { 00:23:14.516 "subsystem": "sock", 00:23:14.516 "config": [ 00:23:14.516 { 00:23:14.516 "method": "sock_set_default_impl", 00:23:14.516 "params": { 00:23:14.516 "impl_name": "posix" 00:23:14.516 } 00:23:14.516 }, 00:23:14.516 
{ 00:23:14.516 "method": "sock_impl_set_options", 00:23:14.516 "params": { 00:23:14.516 "impl_name": "ssl", 00:23:14.516 "recv_buf_size": 4096, 00:23:14.516 "send_buf_size": 4096, 00:23:14.516 "enable_recv_pipe": true, 00:23:14.516 "enable_quickack": false, 00:23:14.516 "enable_placement_id": 0, 00:23:14.516 "enable_zerocopy_send_server": true, 00:23:14.516 "enable_zerocopy_send_client": false, 00:23:14.516 "zerocopy_threshold": 0, 00:23:14.516 "tls_version": 0, 00:23:14.516 "enable_ktls": false 00:23:14.516 } 00:23:14.516 }, 00:23:14.516 { 00:23:14.516 "method": "sock_impl_set_options", 00:23:14.516 "params": { 00:23:14.516 "impl_name": "posix", 00:23:14.516 "recv_buf_size": 2097152, 00:23:14.516 "send_buf_size": 2097152, 00:23:14.516 "enable_recv_pipe": true, 00:23:14.516 "enable_quickack": false, 00:23:14.516 "enable_placement_id": 0, 00:23:14.516 "enable_zerocopy_send_server": true, 00:23:14.516 "enable_zerocopy_send_client": false, 00:23:14.516 "zerocopy_threshold": 0, 00:23:14.516 "tls_version": 0, 00:23:14.516 "enable_ktls": false 00:23:14.516 } 00:23:14.516 } 00:23:14.516 ] 00:23:14.516 }, 00:23:14.516 { 00:23:14.516 "subsystem": "vmd", 00:23:14.516 "config": [] 00:23:14.516 }, 00:23:14.516 { 00:23:14.516 "subsystem": "accel", 00:23:14.516 "config": [ 00:23:14.516 { 00:23:14.516 "method": "accel_set_options", 00:23:14.516 "params": { 00:23:14.516 "small_cache_size": 128, 00:23:14.516 "large_cache_size": 16, 00:23:14.516 "task_count": 2048, 00:23:14.516 "sequence_count": 2048, 00:23:14.516 "buf_count": 2048 00:23:14.516 } 00:23:14.516 } 00:23:14.516 ] 00:23:14.516 }, 00:23:14.516 { 00:23:14.516 "subsystem": "bdev", 00:23:14.516 "config": [ 00:23:14.516 { 00:23:14.516 "method": "bdev_set_options", 00:23:14.516 "params": { 00:23:14.516 "bdev_io_pool_size": 65535, 00:23:14.516 "bdev_io_cache_size": 256, 00:23:14.516 "bdev_auto_examine": true, 00:23:14.516 "iobuf_small_cache_size": 128, 00:23:14.516 "iobuf_large_cache_size": 16 00:23:14.516 } 00:23:14.516 }, 00:23:14.516 { 00:23:14.516 "method": "bdev_raid_set_options", 00:23:14.516 "params": { 00:23:14.516 "process_window_size_kb": 1024 00:23:14.516 } 00:23:14.516 }, 00:23:14.516 { 00:23:14.516 "method": "bdev_iscsi_set_options", 00:23:14.516 "params": { 00:23:14.516 "timeout_sec": 30 00:23:14.516 } 00:23:14.516 }, 00:23:14.516 { 00:23:14.517 "method": "bdev_nvme_set_options", 00:23:14.517 "params": { 00:23:14.517 "action_on_timeout": "none", 00:23:14.517 "timeout_us": 0, 00:23:14.517 "timeout_admin_us": 0, 00:23:14.517 "keep_alive_timeout_ms": 10000, 00:23:14.517 "arbitration_burst": 0, 00:23:14.517 "low_priority_weight": 0, 00:23:14.517 "medium_priority_weight": 0, 00:23:14.517 "high_priority_weight": 0, 00:23:14.517 "nvme_adminq_poll_period_us": 10000, 00:23:14.517 "nvme_ioq_poll_period_us": 0, 00:23:14.517 "io_queue_requests": 0, 00:23:14.517 "delay_cmd_submit": true, 00:23:14.517 "transport_retry_count": 4, 00:23:14.517 "bdev_retry_count": 3, 00:23:14.517 "transport_ack_timeout": 0, 00:23:14.517 "ctrlr_loss_timeout_sec": 0, 00:23:14.517 "reconnect_delay_sec": 0, 00:23:14.517 "fast_io_fail_timeout_sec": 0, 00:23:14.517 "disable_auto_failback": false, 00:23:14.517 "generate_uuids": false, 00:23:14.517 "transport_tos": 0, 00:23:14.517 "nvme_error_stat": false, 00:23:14.517 "rdma_srq_size": 0, 00:23:14.517 "io_path_stat": false, 00:23:14.517 "allow_accel_sequence": false, 00:23:14.517 "rdma_max_cq_size": 0, 00:23:14.517 "rdma_cm_event_timeout_ms": 0, 00:23:14.517 "dhchap_digests": [ 00:23:14.517 "sha256", 00:23:14.517 "sha384", 
00:23:14.517 "sha512" 00:23:14.517 ], 00:23:14.517 "dhchap_dhgroups": [ 00:23:14.517 "null", 00:23:14.517 "ffdhe2048", 00:23:14.517 "ffdhe3072", 00:23:14.517 "ffdhe4096", 00:23:14.517 "ffdhe6144", 00:23:14.517 "ffdhe8192" 00:23:14.517 ] 00:23:14.517 } 00:23:14.517 }, 00:23:14.517 { 00:23:14.517 "method": "bdev_nvme_set_hotplug", 00:23:14.517 "params": { 00:23:14.517 "period_us": 100000, 00:23:14.517 "enable": false 00:23:14.517 } 00:23:14.517 }, 00:23:14.517 { 00:23:14.517 "method": "bdev_malloc_create", 00:23:14.517 "params": { 00:23:14.517 "name": "malloc0", 00:23:14.517 "num_blocks": 8192, 00:23:14.517 "block_size": 4096, 00:23:14.517 "physical_block_size": 4096, 00:23:14.517 "uuid": "cc1491f2-f847-44eb-8dc6-7c6d6c3baab2", 00:23:14.517 "optimal_io_boundary": 0 00:23:14.517 } 00:23:14.517 }, 00:23:14.517 { 00:23:14.517 "method": "bdev_wait_for_examine" 00:23:14.517 } 00:23:14.517 ] 00:23:14.517 }, 00:23:14.517 { 00:23:14.517 "subsystem": "nbd", 00:23:14.517 "config": [] 00:23:14.517 }, 00:23:14.517 { 00:23:14.517 "subsystem": "scheduler", 00:23:14.517 "config": [ 00:23:14.517 { 00:23:14.517 "method": "framework_set_scheduler", 00:23:14.517 "params": { 00:23:14.517 "name": "static" 00:23:14.517 } 00:23:14.517 } 00:23:14.517 ] 00:23:14.517 }, 00:23:14.517 { 00:23:14.517 "subsystem": "nvmf", 00:23:14.517 "config": [ 00:23:14.517 { 00:23:14.517 "method": "nvmf_set_config", 00:23:14.517 "params": { 00:23:14.517 "discovery_filter": "match_any", 00:23:14.517 "admin_cmd_passthru": { 00:23:14.517 "identify_ctrlr": false 00:23:14.517 } 00:23:14.517 } 00:23:14.517 }, 00:23:14.517 { 00:23:14.517 "method": "nvmf_set_max_subsystems", 00:23:14.517 "params": { 00:23:14.517 "max_subsystems": 1024 00:23:14.517 } 00:23:14.517 }, 00:23:14.517 { 00:23:14.517 "method": "nvmf_set_crdt", 00:23:14.517 "params": { 00:23:14.517 "crdt1": 0, 00:23:14.517 "crdt2": 0, 00:23:14.517 "crdt3": 0 00:23:14.517 } 00:23:14.517 }, 00:23:14.517 { 00:23:14.517 "method": "nvmf_create_transport", 00:23:14.517 "params": { 00:23:14.517 "trtype": "TCP", 00:23:14.517 "max_queue_depth": 128, 00:23:14.517 "max_io_qpairs_per_ctrlr": 127, 00:23:14.517 "in_capsule_data_size": 4096, 00:23:14.517 "max_io_size": 131072, 00:23:14.517 "io_unit_size": 131072, 00:23:14.517 "max_aq_depth": 128, 00:23:14.517 "num_shared_buffers": 511, 00:23:14.517 "buf_cache_size": 4294967295, 00:23:14.517 "dif_insert_or_strip": false, 00:23:14.517 "zcopy": false, 00:23:14.517 "c2h_success": false, 00:23:14.517 "sock_priority": 0, 00:23:14.517 "abort_timeout_sec": 1, 00:23:14.517 "ack_timeout": 0, 00:23:14.517 "data_wr_pool_size": 0 00:23:14.517 } 00:23:14.517 }, 00:23:14.517 { 00:23:14.517 "method": "nvmf_create_subsystem", 00:23:14.517 "params": { 00:23:14.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.517 "allow_any_host": false, 00:23:14.517 "serial_number": "00000000000000000000", 00:23:14.517 "model_number": "SPDK bdev Controller", 00:23:14.517 "max_namespaces": 32, 00:23:14.517 "min_cntlid": 1, 00:23:14.517 "max_cntlid": 65519, 00:23:14.517 "ana_reporting": false 00:23:14.517 } 00:23:14.517 }, 00:23:14.517 { 00:23:14.517 "method": "nvmf_subsystem_add_host", 00:23:14.517 "params": { 00:23:14.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.517 "host": "nqn.2016-06.io.spdk:host1", 00:23:14.517 "psk": "key0" 00:23:14.517 } 00:23:14.517 }, 00:23:14.517 { 00:23:14.517 "method": "nvmf_subsystem_add_ns", 00:23:14.517 "params": { 00:23:14.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.517 "namespace": { 00:23:14.517 "nsid": 1, 00:23:14.517 "bdev_name": 
"malloc0", 00:23:14.517 "nguid": "CC1491F2F84744EB8DC67C6D6C3BAAB2", 00:23:14.517 "uuid": "cc1491f2-f847-44eb-8dc6-7c6d6c3baab2", 00:23:14.517 "no_auto_visible": false 00:23:14.517 } 00:23:14.517 } 00:23:14.517 }, 00:23:14.517 { 00:23:14.517 "method": "nvmf_subsystem_add_listener", 00:23:14.517 "params": { 00:23:14.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.517 "listen_address": { 00:23:14.517 "trtype": "TCP", 00:23:14.517 "adrfam": "IPv4", 00:23:14.517 "traddr": "10.0.0.2", 00:23:14.517 "trsvcid": "4420" 00:23:14.517 }, 00:23:14.517 "secure_channel": true 00:23:14.517 } 00:23:14.517 } 00:23:14.517 ] 00:23:14.517 } 00:23:14.517 ] 00:23:14.517 }' 00:23:14.517 17:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:14.776 17:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:14.776 "subsystems": [ 00:23:14.776 { 00:23:14.776 "subsystem": "keyring", 00:23:14.776 "config": [ 00:23:14.776 { 00:23:14.776 "method": "keyring_file_add_key", 00:23:14.776 "params": { 00:23:14.776 "name": "key0", 00:23:14.776 "path": "/tmp/tmp.cnXFtFLDbA" 00:23:14.776 } 00:23:14.776 } 00:23:14.776 ] 00:23:14.776 }, 00:23:14.776 { 00:23:14.776 "subsystem": "iobuf", 00:23:14.776 "config": [ 00:23:14.776 { 00:23:14.776 "method": "iobuf_set_options", 00:23:14.776 "params": { 00:23:14.776 "small_pool_count": 8192, 00:23:14.776 "large_pool_count": 1024, 00:23:14.776 "small_bufsize": 8192, 00:23:14.776 "large_bufsize": 135168 00:23:14.776 } 00:23:14.776 } 00:23:14.776 ] 00:23:14.776 }, 00:23:14.776 { 00:23:14.776 "subsystem": "sock", 00:23:14.776 "config": [ 00:23:14.776 { 00:23:14.776 "method": "sock_set_default_impl", 00:23:14.776 "params": { 00:23:14.776 "impl_name": "posix" 00:23:14.776 } 00:23:14.776 }, 00:23:14.776 { 00:23:14.776 "method": "sock_impl_set_options", 00:23:14.776 "params": { 00:23:14.776 "impl_name": "ssl", 00:23:14.776 "recv_buf_size": 4096, 00:23:14.776 "send_buf_size": 4096, 00:23:14.776 "enable_recv_pipe": true, 00:23:14.776 "enable_quickack": false, 00:23:14.776 "enable_placement_id": 0, 00:23:14.776 "enable_zerocopy_send_server": true, 00:23:14.776 "enable_zerocopy_send_client": false, 00:23:14.776 "zerocopy_threshold": 0, 00:23:14.776 "tls_version": 0, 00:23:14.776 "enable_ktls": false 00:23:14.776 } 00:23:14.776 }, 00:23:14.776 { 00:23:14.776 "method": "sock_impl_set_options", 00:23:14.776 "params": { 00:23:14.776 "impl_name": "posix", 00:23:14.776 "recv_buf_size": 2097152, 00:23:14.776 "send_buf_size": 2097152, 00:23:14.776 "enable_recv_pipe": true, 00:23:14.776 "enable_quickack": false, 00:23:14.776 "enable_placement_id": 0, 00:23:14.776 "enable_zerocopy_send_server": true, 00:23:14.776 "enable_zerocopy_send_client": false, 00:23:14.776 "zerocopy_threshold": 0, 00:23:14.776 "tls_version": 0, 00:23:14.776 "enable_ktls": false 00:23:14.776 } 00:23:14.776 } 00:23:14.776 ] 00:23:14.776 }, 00:23:14.776 { 00:23:14.776 "subsystem": "vmd", 00:23:14.776 "config": [] 00:23:14.776 }, 00:23:14.776 { 00:23:14.776 "subsystem": "accel", 00:23:14.776 "config": [ 00:23:14.776 { 00:23:14.776 "method": "accel_set_options", 00:23:14.776 "params": { 00:23:14.776 "small_cache_size": 128, 00:23:14.776 "large_cache_size": 16, 00:23:14.776 "task_count": 2048, 00:23:14.776 "sequence_count": 2048, 00:23:14.776 "buf_count": 2048 00:23:14.776 } 00:23:14.776 } 00:23:14.776 ] 00:23:14.776 }, 00:23:14.776 { 00:23:14.776 "subsystem": "bdev", 00:23:14.776 "config": [ 00:23:14.776 { 00:23:14.776 
"method": "bdev_set_options", 00:23:14.776 "params": { 00:23:14.776 "bdev_io_pool_size": 65535, 00:23:14.776 "bdev_io_cache_size": 256, 00:23:14.776 "bdev_auto_examine": true, 00:23:14.776 "iobuf_small_cache_size": 128, 00:23:14.776 "iobuf_large_cache_size": 16 00:23:14.776 } 00:23:14.776 }, 00:23:14.776 { 00:23:14.776 "method": "bdev_raid_set_options", 00:23:14.776 "params": { 00:23:14.776 "process_window_size_kb": 1024 00:23:14.776 } 00:23:14.776 }, 00:23:14.776 { 00:23:14.776 "method": "bdev_iscsi_set_options", 00:23:14.776 "params": { 00:23:14.776 "timeout_sec": 30 00:23:14.776 } 00:23:14.776 }, 00:23:14.776 { 00:23:14.776 "method": "bdev_nvme_set_options", 00:23:14.776 "params": { 00:23:14.776 "action_on_timeout": "none", 00:23:14.776 "timeout_us": 0, 00:23:14.776 "timeout_admin_us": 0, 00:23:14.776 "keep_alive_timeout_ms": 10000, 00:23:14.776 "arbitration_burst": 0, 00:23:14.776 "low_priority_weight": 0, 00:23:14.776 "medium_priority_weight": 0, 00:23:14.776 "high_priority_weight": 0, 00:23:14.776 "nvme_adminq_poll_period_us": 10000, 00:23:14.776 "nvme_ioq_poll_period_us": 0, 00:23:14.776 "io_queue_requests": 512, 00:23:14.776 "delay_cmd_submit": true, 00:23:14.776 "transport_retry_count": 4, 00:23:14.776 "bdev_retry_count": 3, 00:23:14.776 "transport_ack_timeout": 0, 00:23:14.776 "ctrlr_loss_timeout_sec": 0, 00:23:14.776 "reconnect_delay_sec": 0, 00:23:14.776 "fast_io_fail_timeout_sec": 0, 00:23:14.776 "disable_auto_failback": false, 00:23:14.776 "generate_uuids": false, 00:23:14.776 "transport_tos": 0, 00:23:14.776 "nvme_error_stat": false, 00:23:14.776 "rdma_srq_size": 0, 00:23:14.776 "io_path_stat": false, 00:23:14.776 "allow_accel_sequence": false, 00:23:14.776 "rdma_max_cq_size": 0, 00:23:14.776 "rdma_cm_event_timeout_ms": 0, 00:23:14.776 "dhchap_digests": [ 00:23:14.776 "sha256", 00:23:14.776 "sha384", 00:23:14.776 "sha512" 00:23:14.776 ], 00:23:14.776 "dhchap_dhgroups": [ 00:23:14.776 "null", 00:23:14.776 "ffdhe2048", 00:23:14.776 "ffdhe3072", 00:23:14.776 "ffdhe4096", 00:23:14.776 "ffdhe6144", 00:23:14.776 "ffdhe8192" 00:23:14.776 ] 00:23:14.776 } 00:23:14.776 }, 00:23:14.776 { 00:23:14.776 "method": "bdev_nvme_attach_controller", 00:23:14.776 "params": { 00:23:14.776 "name": "nvme0", 00:23:14.776 "trtype": "TCP", 00:23:14.776 "adrfam": "IPv4", 00:23:14.776 "traddr": "10.0.0.2", 00:23:14.776 "trsvcid": "4420", 00:23:14.776 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.776 "prchk_reftag": false, 00:23:14.776 "prchk_guard": false, 00:23:14.776 "ctrlr_loss_timeout_sec": 0, 00:23:14.776 "reconnect_delay_sec": 0, 00:23:14.776 "fast_io_fail_timeout_sec": 0, 00:23:14.776 "psk": "key0", 00:23:14.776 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.776 "hdgst": false, 00:23:14.776 "ddgst": false 00:23:14.776 } 00:23:14.776 }, 00:23:14.776 { 00:23:14.776 "method": "bdev_nvme_set_hotplug", 00:23:14.776 "params": { 00:23:14.776 "period_us": 100000, 00:23:14.776 "enable": false 00:23:14.776 } 00:23:14.776 }, 00:23:14.776 { 00:23:14.776 "method": "bdev_enable_histogram", 00:23:14.776 "params": { 00:23:14.776 "name": "nvme0n1", 00:23:14.776 "enable": true 00:23:14.777 } 00:23:14.777 }, 00:23:14.777 { 00:23:14.777 "method": "bdev_wait_for_examine" 00:23:14.777 } 00:23:14.777 ] 00:23:14.777 }, 00:23:14.777 { 00:23:14.777 "subsystem": "nbd", 00:23:14.777 "config": [] 00:23:14.777 } 00:23:14.777 ] 00:23:14.777 }' 00:23:14.777 17:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 990873 00:23:14.777 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 990873 ']' 
00:23:14.777 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 990873 00:23:14.777 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:14.777 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:14.777 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 990873 00:23:14.777 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:14.777 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:14.777 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 990873' 00:23:14.777 killing process with pid 990873 00:23:14.777 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 990873 00:23:14.777 Received shutdown signal, test time was about 1.000000 seconds 00:23:14.777 00:23:14.777 Latency(us) 00:23:14.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.777 =================================================================================================================== 00:23:14.777 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.777 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 990873 00:23:15.034 17:58:49 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 990733 00:23:15.034 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 990733 ']' 00:23:15.034 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 990733 00:23:15.034 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:15.034 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:15.034 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 990733 00:23:15.034 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:15.034 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:15.034 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 990733' 00:23:15.034 killing process with pid 990733 00:23:15.034 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 990733 00:23:15.034 17:58:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 990733 00:23:15.292 17:58:50 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:15.292 17:58:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:15.292 17:58:50 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:15.292 "subsystems": [ 00:23:15.292 { 00:23:15.292 "subsystem": "keyring", 00:23:15.292 "config": [ 00:23:15.292 { 00:23:15.292 "method": "keyring_file_add_key", 00:23:15.292 "params": { 00:23:15.292 "name": "key0", 00:23:15.292 "path": "/tmp/tmp.cnXFtFLDbA" 00:23:15.292 } 00:23:15.292 } 00:23:15.292 ] 00:23:15.292 }, 00:23:15.292 { 00:23:15.292 "subsystem": "iobuf", 00:23:15.292 "config": [ 00:23:15.292 { 00:23:15.292 "method": "iobuf_set_options", 00:23:15.292 "params": { 00:23:15.292 "small_pool_count": 8192, 00:23:15.292 "large_pool_count": 1024, 00:23:15.292 "small_bufsize": 8192, 00:23:15.292 "large_bufsize": 135168 00:23:15.292 } 00:23:15.292 } 00:23:15.292 ] 00:23:15.292 }, 00:23:15.292 { 00:23:15.292 "subsystem": "sock", 00:23:15.292 "config": [ 00:23:15.292 { 00:23:15.292 "method": "sock_set_default_impl", 00:23:15.292 "params": { 
00:23:15.292 "impl_name": "posix" 00:23:15.292 } 00:23:15.292 }, 00:23:15.292 { 00:23:15.292 "method": "sock_impl_set_options", 00:23:15.292 "params": { 00:23:15.292 "impl_name": "ssl", 00:23:15.292 "recv_buf_size": 4096, 00:23:15.292 "send_buf_size": 4096, 00:23:15.292 "enable_recv_pipe": true, 00:23:15.292 "enable_quickack": false, 00:23:15.292 "enable_placement_id": 0, 00:23:15.292 "enable_zerocopy_send_server": true, 00:23:15.292 "enable_zerocopy_send_client": false, 00:23:15.292 "zerocopy_threshold": 0, 00:23:15.292 "tls_version": 0, 00:23:15.292 "enable_ktls": false 00:23:15.292 } 00:23:15.292 }, 00:23:15.292 { 00:23:15.292 "method": "sock_impl_set_options", 00:23:15.292 "params": { 00:23:15.292 "impl_name": "posix", 00:23:15.292 "recv_buf_size": 2097152, 00:23:15.292 "send_buf_size": 2097152, 00:23:15.292 "enable_recv_pipe": true, 00:23:15.292 "enable_quickack": false, 00:23:15.292 "enable_placement_id": 0, 00:23:15.292 "enable_zerocopy_send_server": true, 00:23:15.292 "enable_zerocopy_send_client": false, 00:23:15.292 "zerocopy_threshold": 0, 00:23:15.292 "tls_version": 0, 00:23:15.292 "enable_ktls": false 00:23:15.292 } 00:23:15.292 } 00:23:15.292 ] 00:23:15.292 }, 00:23:15.292 { 00:23:15.292 "subsystem": "vmd", 00:23:15.292 "config": [] 00:23:15.292 }, 00:23:15.292 { 00:23:15.292 "subsystem": "accel", 00:23:15.292 "config": [ 00:23:15.292 { 00:23:15.292 "method": "accel_set_options", 00:23:15.292 "params": { 00:23:15.292 "small_cache_size": 128, 00:23:15.292 "large_cache_size": 16, 00:23:15.292 "task_count": 2048, 00:23:15.292 "sequence_count": 2048, 00:23:15.292 "buf_count": 2048 00:23:15.292 } 00:23:15.292 } 00:23:15.292 ] 00:23:15.292 }, 00:23:15.292 { 00:23:15.292 "subsystem": "bdev", 00:23:15.292 "config": [ 00:23:15.292 { 00:23:15.292 "method": "bdev_set_options", 00:23:15.292 "params": { 00:23:15.292 "bdev_io_pool_size": 65535, 00:23:15.292 "bdev_io_cache_size": 256, 00:23:15.292 "bdev_auto_examine": true, 00:23:15.292 "iobuf_small_cache_size": 128, 00:23:15.292 "iobuf_large_cache_size": 16 00:23:15.292 } 00:23:15.292 }, 00:23:15.292 { 00:23:15.292 "method": "bdev_raid_set_options", 00:23:15.292 "params": { 00:23:15.292 "process_window_size_kb": 1024 00:23:15.292 } 00:23:15.292 }, 00:23:15.292 { 00:23:15.292 "method": "bdev_iscsi_set_options", 00:23:15.292 "params": { 00:23:15.292 "timeout_sec": 30 00:23:15.292 } 00:23:15.292 }, 00:23:15.292 { 00:23:15.292 "method": "bdev_nvme_set_options", 00:23:15.292 "params": { 00:23:15.292 "action_on_timeout": "none", 00:23:15.292 "timeout_us": 0, 00:23:15.292 "timeout_admin_us": 0, 00:23:15.292 "keep_alive_timeout_ms": 10000, 00:23:15.292 "arbitration_burst": 0, 00:23:15.292 "low_priority_weight": 0, 00:23:15.292 "medium_priority_weight": 0, 00:23:15.292 "high_priority_weight": 0, 00:23:15.292 "nvme_adminq_poll_period_us": 10000, 00:23:15.292 "nvme_ioq_poll_period_us": 0, 00:23:15.292 "io_queue_requests": 0, 00:23:15.293 "delay_cmd_submit": true, 00:23:15.293 "transport_retry_count": 4, 00:23:15.293 "bdev_retry_count": 3, 00:23:15.293 "transport_ack_timeout": 0, 00:23:15.293 "ctrlr_loss_timeout_sec": 0, 00:23:15.293 "reconnect_delay_sec": 0, 00:23:15.293 "fast_io_fail_timeout_sec": 0, 00:23:15.293 "disable_auto_failback": false, 00:23:15.293 "generate_uuids": false, 00:23:15.293 "transport_tos": 0, 00:23:15.293 "nvme_error_stat": false, 00:23:15.293 "rdma_srq_size": 0, 00:23:15.293 "io_path_stat": false, 00:23:15.293 "allow_accel_sequence": false, 00:23:15.293 "rdma_max_cq_size": 0, 00:23:15.293 "rdma_cm_event_timeout_ms": 0, 
00:23:15.293 "dhchap_digests": [ 00:23:15.293 "sha256", 00:23:15.293 "sha384", 00:23:15.293 "sha512" 00:23:15.293 ], 00:23:15.293 "dhchap_dhgroups": [ 00:23:15.293 "null", 00:23:15.293 "ffdhe2048", 00:23:15.293 "ffdhe3072", 00:23:15.293 "ffdhe4096", 00:23:15.293 "ffdhe6144", 00:23:15.293 "ffdhe8192" 00:23:15.293 ] 00:23:15.293 } 00:23:15.293 }, 00:23:15.293 { 00:23:15.293 "method": "bdev_nvme_set_hotplug", 00:23:15.293 "params": { 00:23:15.293 "period_us": 100000, 00:23:15.293 "enable": false 00:23:15.293 } 00:23:15.293 }, 00:23:15.293 { 00:23:15.293 "method": "bdev_malloc_create", 00:23:15.293 "params": { 00:23:15.293 "name": "malloc0", 00:23:15.293 "num_blocks": 8192, 00:23:15.293 "block_size": 4096, 00:23:15.293 "physical_block_size": 4096, 00:23:15.293 "uuid": "cc1491f2-f847-44eb-8dc6-7c6d6c3baab2", 00:23:15.293 "optimal_io_boundary": 0 00:23:15.293 } 00:23:15.293 }, 00:23:15.293 { 00:23:15.293 "method": "bdev_wait_for_examine" 00:23:15.293 } 00:23:15.293 ] 00:23:15.293 }, 00:23:15.293 { 00:23:15.293 "subsystem": "nbd", 00:23:15.293 "config": [] 00:23:15.293 }, 00:23:15.293 { 00:23:15.293 "subsystem": "scheduler", 00:23:15.293 "config": [ 00:23:15.293 { 00:23:15.293 "method": "framework_set_scheduler", 00:23:15.293 "params": { 00:23:15.293 "name": "static" 00:23:15.293 } 00:23:15.293 } 00:23:15.293 ] 00:23:15.293 }, 00:23:15.293 { 00:23:15.293 "subsystem": "nvmf", 00:23:15.293 "config": [ 00:23:15.293 { 00:23:15.293 "method": "nvmf_set_config", 00:23:15.293 "params": { 00:23:15.293 "discovery_filter": "match_any", 00:23:15.293 "admin_cmd_passthru": { 00:23:15.293 "identify_ctrlr": false 00:23:15.293 } 00:23:15.293 } 00:23:15.293 }, 00:23:15.293 { 00:23:15.293 "method": "nvmf_set_max_subsystems", 00:23:15.293 "params": { 00:23:15.293 "max_subsystems": 1024 00:23:15.293 } 00:23:15.293 }, 00:23:15.293 { 00:23:15.293 "method": "nvmf_set_crdt", 00:23:15.293 "params": { 00:23:15.293 "crdt1": 0, 00:23:15.293 "crdt2": 0, 00:23:15.293 "crdt3": 0 00:23:15.293 } 00:23:15.293 }, 00:23:15.293 { 00:23:15.293 "method": "nvmf_create_transport", 00:23:15.293 "params": { 00:23:15.293 "trtype": "TCP", 00:23:15.293 "max_queue_depth": 128, 00:23:15.293 "max_io_qpairs_per_ctrlr": 127, 00:23:15.293 "in_capsule_data_size": 4096, 00:23:15.293 "max_io_size": 131072, 00:23:15.293 "io_unit_size": 131072, 00:23:15.293 "max_aq_depth": 128, 00:23:15.293 "num_shared_buffers": 511, 00:23:15.293 "buf_cache_size": 4294967295, 00:23:15.293 "dif_insert_or_strip": false, 00:23:15.293 "zcopy": false, 00:23:15.293 "c2h_success": false, 00:23:15.293 "sock_priority": 0, 00:23:15.293 "abort_timeout_sec": 1, 00:23:15.293 "ack_timeout": 0, 00:23:15.293 "data_wr_pool_size": 0 00:23:15.293 } 00:23:15.293 }, 00:23:15.293 { 00:23:15.293 "method": "nvmf_create_subsystem", 00:23:15.293 "params": { 00:23:15.293 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.293 "allow_any_host": false, 00:23:15.293 "serial_number": "00000000000000000000", 00:23:15.293 "model_number": "SPDK bdev Controller", 00:23:15.293 "max_namespaces": 32, 00:23:15.293 "min_cntlid": 1, 00:23:15.293 "max_cntlid": 65519, 00:23:15.293 "ana_reporting": false 00:23:15.293 } 00:23:15.293 }, 00:23:15.293 { 00:23:15.293 "method": "nvmf_subsystem_add_host", 00:23:15.293 "params": { 00:23:15.293 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.293 "host": "nqn.2016-06.io.spdk:host1", 00:23:15.293 "psk": "key0" 00:23:15.293 } 00:23:15.293 }, 00:23:15.293 { 00:23:15.293 "method": "nvmf_subsystem_add_ns", 00:23:15.293 "params": { 00:23:15.293 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:23:15.293 "namespace": { 00:23:15.293 "nsid": 1, 00:23:15.293 "bdev_name": "malloc0", 00:23:15.293 "nguid": "CC1491F2F84744EB8DC67C6D6C3BAAB2", 00:23:15.293 "uuid": "cc1491f2-f847-44eb-8dc6-7c6d6c3baab2", 00:23:15.293 "no_auto_visible": false 00:23:15.293 } 00:23:15.293 } 00:23:15.293 }, 00:23:15.293 { 00:23:15.293 "method": "nvmf_subsystem_add_listener", 00:23:15.293 "params": { 00:23:15.293 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.293 "listen_address": { 00:23:15.293 "trtype": "TCP", 00:23:15.293 "adrfam": "IPv4", 00:23:15.293 "traddr": "10.0.0.2", 00:23:15.293 "trsvcid": "4420" 00:23:15.293 }, 00:23:15.293 "secure_channel": true 00:23:15.293 } 00:23:15.293 } 00:23:15.293 ] 00:23:15.293 } 00:23:15.293 ] 00:23:15.293 }' 00:23:15.293 17:58:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:15.293 17:58:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.293 17:58:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=991175 00:23:15.293 17:58:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:15.293 17:58:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 991175 00:23:15.293 17:58:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 991175 ']' 00:23:15.293 17:58:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.293 17:58:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:15.293 17:58:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.293 17:58:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:15.293 17:58:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.293 [2024-07-20 17:58:50.076045] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:15.293 [2024-07-20 17:58:50.076148] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.550 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.550 [2024-07-20 17:58:50.144605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.550 [2024-07-20 17:58:50.236901] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.550 [2024-07-20 17:58:50.236955] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.551 [2024-07-20 17:58:50.236972] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.551 [2024-07-20 17:58:50.236985] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.551 [2024-07-20 17:58:50.236997] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:15.551 [2024-07-20 17:58:50.237078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.808 [2024-07-20 17:58:50.471608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.808 [2024-07-20 17:58:50.503621] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:15.808 [2024-07-20 17:58:50.516011] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.373 17:58:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:16.373 17:58:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:16.373 17:58:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:16.373 17:58:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:16.373 17:58:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.373 17:58:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.373 17:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=991316 00:23:16.373 17:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 991316 /var/tmp/bdevperf.sock 00:23:16.373 17:58:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 991316 ']' 00:23:16.373 17:58:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.373 17:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:16.373 17:58:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:16.373 17:58:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:16.373 17:58:51 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:16.373 "subsystems": [ 00:23:16.373 { 00:23:16.373 "subsystem": "keyring", 00:23:16.373 "config": [ 00:23:16.373 { 00:23:16.373 "method": "keyring_file_add_key", 00:23:16.373 "params": { 00:23:16.373 "name": "key0", 00:23:16.373 "path": "/tmp/tmp.cnXFtFLDbA" 00:23:16.373 } 00:23:16.373 } 00:23:16.373 ] 00:23:16.373 }, 00:23:16.373 { 00:23:16.373 "subsystem": "iobuf", 00:23:16.373 "config": [ 00:23:16.373 { 00:23:16.373 "method": "iobuf_set_options", 00:23:16.373 "params": { 00:23:16.374 "small_pool_count": 8192, 00:23:16.374 "large_pool_count": 1024, 00:23:16.374 "small_bufsize": 8192, 00:23:16.374 "large_bufsize": 135168 00:23:16.374 } 00:23:16.374 } 00:23:16.374 ] 00:23:16.374 }, 00:23:16.374 { 00:23:16.374 "subsystem": "sock", 00:23:16.374 "config": [ 00:23:16.374 { 00:23:16.374 "method": "sock_set_default_impl", 00:23:16.374 "params": { 00:23:16.374 "impl_name": "posix" 00:23:16.374 } 00:23:16.374 }, 00:23:16.374 { 00:23:16.374 "method": "sock_impl_set_options", 00:23:16.374 "params": { 00:23:16.374 "impl_name": "ssl", 00:23:16.374 "recv_buf_size": 4096, 00:23:16.374 "send_buf_size": 4096, 00:23:16.374 "enable_recv_pipe": true, 00:23:16.374 "enable_quickack": false, 00:23:16.374 "enable_placement_id": 0, 00:23:16.374 "enable_zerocopy_send_server": true, 00:23:16.374 "enable_zerocopy_send_client": false, 00:23:16.374 "zerocopy_threshold": 0, 00:23:16.374 "tls_version": 0, 00:23:16.374 "enable_ktls": false 00:23:16.374 } 00:23:16.374 }, 00:23:16.374 { 00:23:16.374 "method": "sock_impl_set_options", 00:23:16.374 "params": { 00:23:16.374 "impl_name": "posix", 00:23:16.374 "recv_buf_size": 2097152, 00:23:16.374 "send_buf_size": 2097152, 00:23:16.374 "enable_recv_pipe": true, 00:23:16.374 "enable_quickack": false, 00:23:16.374 "enable_placement_id": 0, 00:23:16.374 "enable_zerocopy_send_server": true, 00:23:16.374 "enable_zerocopy_send_client": false, 00:23:16.374 "zerocopy_threshold": 0, 00:23:16.374 "tls_version": 0, 00:23:16.374 "enable_ktls": false 00:23:16.374 } 00:23:16.374 } 00:23:16.374 ] 00:23:16.374 }, 00:23:16.374 { 00:23:16.374 "subsystem": "vmd", 00:23:16.374 "config": [] 00:23:16.374 }, 00:23:16.374 { 00:23:16.374 "subsystem": "accel", 00:23:16.374 "config": [ 00:23:16.374 { 00:23:16.374 "method": "accel_set_options", 00:23:16.374 "params": { 00:23:16.374 "small_cache_size": 128, 00:23:16.374 "large_cache_size": 16, 00:23:16.374 "task_count": 2048, 00:23:16.374 "sequence_count": 2048, 00:23:16.374 "buf_count": 2048 00:23:16.374 } 00:23:16.374 } 00:23:16.374 ] 00:23:16.374 }, 00:23:16.374 { 00:23:16.374 "subsystem": "bdev", 00:23:16.374 "config": [ 00:23:16.374 { 00:23:16.374 "method": "bdev_set_options", 00:23:16.374 "params": { 00:23:16.374 "bdev_io_pool_size": 65535, 00:23:16.374 "bdev_io_cache_size": 256, 00:23:16.374 "bdev_auto_examine": true, 00:23:16.374 "iobuf_small_cache_size": 128, 00:23:16.374 "iobuf_large_cache_size": 16 00:23:16.374 } 00:23:16.374 }, 00:23:16.374 { 00:23:16.374 "method": "bdev_raid_set_options", 00:23:16.374 "params": { 00:23:16.374 "process_window_size_kb": 1024 00:23:16.374 } 00:23:16.374 }, 00:23:16.374 { 00:23:16.374 "method": "bdev_iscsi_set_options", 00:23:16.374 "params": { 00:23:16.374 "timeout_sec": 30 00:23:16.374 } 00:23:16.374 }, 00:23:16.374 { 00:23:16.374 "method": "bdev_nvme_set_options", 00:23:16.374 "params": { 00:23:16.374 "action_on_timeout": "none", 00:23:16.374 "timeout_us": 0, 00:23:16.374 "timeout_admin_us": 0, 00:23:16.374 "keep_alive_timeout_ms": 
10000, 00:23:16.374 "arbitration_burst": 0, 00:23:16.374 "low_priority_weight": 0, 00:23:16.374 "medium_priority_weight": 0, 00:23:16.374 "high_priority_weight": 0, 00:23:16.374 "nvme_adminq_poll_period_us": 10000, 00:23:16.374 "nvme_ioq_poll_period_us": 0, 00:23:16.374 "io_queue_requests": 512, 00:23:16.374 "delay_cmd_submit": true, 00:23:16.374 "transport_retry_count": 4, 00:23:16.374 "bdev_retry_count": 3, 00:23:16.374 "transport_ack_timeout": 0, 00:23:16.374 "ctrlr_loss_timeout_sec": 0, 00:23:16.374 "reconnect_delay_sec": 0, 00:23:16.374 "fast_io_fail_timeout_sec": 0, 00:23:16.374 "disable_auto_failback": false, 00:23:16.374 "generate_uuids": false, 00:23:16.374 "transport_tos": 0, 00:23:16.374 "nvme_error_stat": false, 00:23:16.374 "rdma_srq_size": 0, 00:23:16.374 "io_path_stat": false, 00:23:16.374 "allow_accel_sequence": false, 00:23:16.374 "rdma_max_cq_size": 0, 00:23:16.374 "rdma_cm_event_timeout_ms": 0, 00:23:16.374 "dhchap_digests": [ 00:23:16.374 "sha256", 00:23:16.374 "sha384", 00:23:16.374 "sha512" 00:23:16.374 ], 00:23:16.374 "dhchap_dhgroups": [ 00:23:16.374 "null", 00:23:16.374 "ffdhe2048", 00:23:16.374 "ffdhe3072", 00:23:16.374 "ffdhe4096", 00:23:16.374 "ffdhe6144", 00:23:16.374 "ffdhe8192" 00:23:16.374 ] 00:23:16.374 } 00:23:16.374 }, 00:23:16.374 { 00:23:16.374 "method": "bdev_nvme_attach_controller", 00:23:16.374 "params": { 00:23:16.374 "name": "nvme0", 00:23:16.374 "trtype": "TCP", 00:23:16.374 "adrfam": "IPv4", 00:23:16.374 "traddr": "10.0.0.2", 00:23:16.374 "trsvcid": "4420", 00:23:16.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.374 "prchk_reftag": false, 00:23:16.374 "prchk_guard": false, 00:23:16.374 "ctrlr_loss_timeout_sec": 0, 00:23:16.374 "reconnect_delay_sec": 0, 00:23:16.374 "fast_io_fail_timeout_sec": 0, 00:23:16.374 "psk": "key0", 00:23:16.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.374 "hdgst": false, 00:23:16.374 "ddgst": false 00:23:16.374 } 00:23:16.374 }, 00:23:16.374 { 00:23:16.374 "method": "bdev_nvme_set_hotplug", 00:23:16.374 "params": { 00:23:16.374 "period_us": 100000, 00:23:16.374 "enable": false 00:23:16.374 } 00:23:16.374 }, 00:23:16.374 { 00:23:16.374 "method": "bdev_enable_histogram", 00:23:16.374 "params": { 00:23:16.374 "name": "nvme0n1", 00:23:16.374 "enable": true 00:23:16.374 } 00:23:16.374 }, 00:23:16.374 { 00:23:16.374 "method": "bdev_wait_for_examine" 00:23:16.374 } 00:23:16.374 ] 00:23:16.374 }, 00:23:16.374 { 00:23:16.374 "subsystem": "nbd", 00:23:16.374 "config": [] 00:23:16.374 } 00:23:16.374 ] 00:23:16.374 }' 00:23:16.374 17:58:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:16.374 17:58:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.633 [2024-07-20 17:58:51.179858] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:23:16.633 [2024-07-20 17:58:51.179951] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991316 ] 00:23:16.633 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.633 [2024-07-20 17:58:51.244946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.633 [2024-07-20 17:58:51.337371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.890 [2024-07-20 17:58:51.514264] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.476 17:58:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:17.476 17:58:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:17.476 17:58:52 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:17.476 17:58:52 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:17.733 17:58:52 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.733 17:58:52 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:17.990 Running I/O for 1 seconds... 00:23:18.921 00:23:18.921 Latency(us) 00:23:18.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.921 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:18.921 Verification LBA range: start 0x0 length 0x2000 00:23:18.921 nvme0n1 : 1.12 694.48 2.71 0.00 0.00 177173.95 7233.23 217482.43 00:23:18.921 =================================================================================================================== 00:23:18.921 Total : 694.48 2.71 0.00 0.00 177173.95 7233.23 217482.43 00:23:18.921 0 00:23:18.921 17:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:18.921 17:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:18.921 17:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:18.921 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:23:18.921 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:23:18.921 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:18.921 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:18.921 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:18.921 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:18.921 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:18.921 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:18.921 nvmf_trace.0 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 991316 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 991316 ']' 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 991316 
00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 991316 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 991316' 00:23:19.179 killing process with pid 991316 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 991316 00:23:19.179 Received shutdown signal, test time was about 1.000000 seconds 00:23:19.179 00:23:19.179 Latency(us) 00:23:19.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.179 =================================================================================================================== 00:23:19.179 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 991316 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:19.179 17:58:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:19.179 rmmod nvme_tcp 00:23:19.437 rmmod nvme_fabrics 00:23:19.437 rmmod nvme_keyring 00:23:19.437 17:58:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:19.437 17:58:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:19.437 17:58:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:19.437 17:58:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 991175 ']' 00:23:19.437 17:58:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 991175 00:23:19.437 17:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 991175 ']' 00:23:19.437 17:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 991175 00:23:19.437 17:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:19.437 17:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:19.437 17:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 991175 00:23:19.437 17:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:19.437 17:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:19.437 17:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 991175' 00:23:19.437 killing process with pid 991175 00:23:19.437 17:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 991175 00:23:19.437 17:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 991175 00:23:19.695 17:58:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:19.695 17:58:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:19.695 17:58:54 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:19.695 17:58:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:19.695 17:58:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:19.695 17:58:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.695 17:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.695 17:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.617 17:58:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:21.617 17:58:56 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.xYJrmUvYCz /tmp/tmp.aREz7rOQ4a /tmp/tmp.cnXFtFLDbA 00:23:21.617 00:23:21.617 real 1m19.528s 00:23:21.617 user 2m7.812s 00:23:21.617 sys 0m27.734s 00:23:21.617 17:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:21.617 17:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.617 ************************************ 00:23:21.617 END TEST nvmf_tls 00:23:21.617 ************************************ 00:23:21.617 17:58:56 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:21.617 17:58:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:21.618 17:58:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:21.618 17:58:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:21.618 ************************************ 00:23:21.618 START TEST nvmf_fips 00:23:21.618 ************************************ 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:21.618 * Looking for test storage... 
00:23:21.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.618 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.877 17:58:56 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:21.877 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:21.878 Error setting digest 00:23:21.878 00E23EC4127F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:21.878 00E23EC4127F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:21.878 17:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:23.783 
17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:23.783 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:23.783 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:23.783 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:23.783 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:23.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:23:23.783 00:23:23.783 --- 10.0.0.2 ping statistics --- 00:23:23.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.783 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:23.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:23:23.783 00:23:23.783 --- 10.0.0.1 ping statistics --- 00:23:23.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.783 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=993678 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 993678 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 993678 ']' 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.783 17:58:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:23.784 17:58:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:24.043 [2024-07-20 17:58:58.610883] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:24.043 [2024-07-20 17:58:58.610966] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.043 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.043 [2024-07-20 17:58:58.673767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.043 [2024-07-20 17:58:58.760182] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.043 [2024-07-20 17:58:58.760239] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:24.043 [2024-07-20 17:58:58.760252] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.043 [2024-07-20 17:58:58.760264] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.043 [2024-07-20 17:58:58.760290] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.043 [2024-07-20 17:58:58.760325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.301 17:58:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:24.301 17:58:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:24.301 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.301 17:58:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.301 17:58:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:24.301 17:58:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.301 17:58:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:24.301 17:58:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:24.301 17:58:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:24.301 17:58:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:24.301 17:58:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:24.301 17:58:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:24.301 17:58:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:24.301 17:58:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:24.560 [2024-07-20 17:58:59.139365] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.560 [2024-07-20 17:58:59.155352] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:24.560 [2024-07-20 17:58:59.155595] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.560 [2024-07-20 17:58:59.187062] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:24.560 malloc0 00:23:24.560 17:58:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:24.560 17:58:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=993761 00:23:24.560 17:58:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:24.560 17:58:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 993761 /var/tmp/bdevperf.sock 00:23:24.560 17:58:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 993761 ']' 00:23:24.560 17:58:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.560 17:58:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # 
local max_retries=100 00:23:24.560 17:58:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.560 17:58:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:24.560 17:58:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:24.560 [2024-07-20 17:58:59.279336] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:24.560 [2024-07-20 17:58:59.279425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid993761 ] 00:23:24.560 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.560 [2024-07-20 17:58:59.338494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.818 [2024-07-20 17:58:59.425943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.819 17:58:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:24.819 17:58:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:24.819 17:58:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:25.076 [2024-07-20 17:58:59.748960] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:25.076 [2024-07-20 17:58:59.749100] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:25.076 TLSTESTn1 00:23:25.076 17:58:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:25.333 Running I/O for 10 seconds... 
00:23:35.356 00:23:35.356 Latency(us) 00:23:35.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.356 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:35.356 Verification LBA range: start 0x0 length 0x2000 00:23:35.356 TLSTESTn1 : 10.11 950.77 3.71 0.00 0.00 134099.62 6092.42 192627.29 00:23:35.356 =================================================================================================================== 00:23:35.356 Total : 950.77 3.71 0.00 0.00 134099.62 6092.42 192627.29 00:23:35.356 0 00:23:35.356 17:59:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:35.356 17:59:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:35.356 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:23:35.356 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:23:35.356 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:35.356 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:35.356 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:35.356 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:35.356 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:35.356 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:35.356 nvmf_trace.0 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 993761 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 993761 ']' 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 993761 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 993761 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 993761' 00:23:35.614 killing process with pid 993761 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 993761 00:23:35.614 Received shutdown signal, test time was about 10.000000 seconds 00:23:35.614 00:23:35.614 Latency(us) 00:23:35.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.614 =================================================================================================================== 00:23:35.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:35.614 [2024-07-20 17:59:10.179288] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 993761 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.614 17:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:35.614 rmmod nvme_tcp 00:23:35.614 rmmod nvme_fabrics 00:23:35.614 rmmod nvme_keyring 00:23:35.871 17:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:35.871 17:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:35.871 17:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:35.871 17:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 993678 ']' 00:23:35.871 17:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 993678 00:23:35.871 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 993678 ']' 00:23:35.871 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 993678 00:23:35.871 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:35.871 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:35.871 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 993678 00:23:35.871 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:35.871 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:35.871 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 993678' 00:23:35.871 killing process with pid 993678 00:23:35.871 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 993678 00:23:35.871 [2024-07-20 17:59:10.444629] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:35.871 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 993678 00:23:36.130 17:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.130 17:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:36.130 17:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:36.130 17:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.130 17:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.130 17:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.130 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.130 17:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.028 17:59:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:38.028 17:59:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:38.028 00:23:38.028 real 0m16.368s 00:23:38.028 user 0m20.099s 00:23:38.028 sys 0m6.482s 00:23:38.028 17:59:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:38.028 17:59:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:38.028 ************************************ 00:23:38.028 END TEST nvmf_fips 00:23:38.028 
************************************ 00:23:38.028 17:59:12 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:38.028 17:59:12 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:38.028 17:59:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:38.028 17:59:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:38.028 17:59:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:38.028 ************************************ 00:23:38.028 START TEST nvmf_fuzz 00:23:38.028 ************************************ 00:23:38.028 17:59:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:38.028 * Looking for test storage... 00:23:38.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:38.028 17:59:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.028 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:38.028 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.028 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.028 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.028 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.028 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.028 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.028 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.028 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.028 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:38.286 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.287 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.287 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.287 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:38.287 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:38.287 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:38.287 17:59:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:38.287 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:38.287 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.287 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:38.287 17:59:12 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:38.287 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:38.287 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.287 17:59:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.287 17:59:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.287 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:38.287 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:38.287 17:59:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:38.287 17:59:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:40.187 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:40.188 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:40.188 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:40.188 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:40.188 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:40.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:23:40.188 00:23:40.188 --- 10.0.0.2 ping statistics --- 00:23:40.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.188 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:40.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:23:40.188 00:23:40.188 --- 10.0.0.1 ping statistics --- 00:23:40.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.188 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=996951 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 996951 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 996951 ']' 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
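Editor's note: the nvmf_tcp_init trace above boils down to a small amount of namespace plumbing. The sketch below restates it as a standalone sequence, assuming root and the interface names from this run (cvl_0_1 stays in the default namespace as the initiator, cvl_0_0 is moved into a private namespace for the target); $SPDK_DIR is an illustrative stand-in for the workspace path in the log, everything else mirrors the traced commands.

# Standalone sketch of the nvmf_tcp_init steps traced above (run as root).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # reachability in both directions
# The fuzz target is then started inside the namespace, exactly as traced:
ip netns exec "$NS" "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &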
00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:40.188 17:59:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:40.446 17:59:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:40.446 17:59:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:23:40.446 17:59:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:40.446 17:59:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.446 17:59:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:40.446 17:59:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.446 17:59:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:40.446 17:59:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.446 17:59:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:40.705 Malloc0 00:23:40.705 17:59:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.705 17:59:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:40.705 17:59:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.705 17:59:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:40.705 17:59:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.705 17:59:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:40.705 17:59:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.705 17:59:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:40.705 17:59:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.705 17:59:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.705 17:59:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.705 17:59:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:40.705 17:59:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.705 17:59:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:40.705 17:59:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:12.767 Fuzzing completed. 
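The rpc_cmd calls above map one-to-one onto SPDK's scripts/rpc.py (in the harness, rpc_cmd is essentially a wrapper around it talking to /var/tmp/spdk.sock, which a network namespace does not hide). A hedged sketch of the same subsystem setup and of the first, 30-second random fuzz pass, using exactly the arguments from the trace and $SPDK_DIR as a stand-in for the checkout path:

RPC="$SPDK_DIR/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport; -o/-u as traced above
$RPC bdev_malloc_create -b Malloc0 64 512           # 64 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
# 30 seconds of randomly generated commands with a fixed seed, run from the initiator side.
"$SPDK_DIR"/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a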
Shutting down the fuzz application 00:24:12.767 00:24:12.767 Dumping successful admin opcodes: 00:24:12.767 8, 9, 10, 24, 00:24:12.767 Dumping successful io opcodes: 00:24:12.767 0, 9, 00:24:12.767 NS: 0x200003aeff00 I/O qp, Total commands completed: 427193, total successful commands: 2498, random_seed: 4181878400 00:24:12.767 NS: 0x200003aeff00 admin qp, Total commands completed: 53200, total successful commands: 429, random_seed: 530069376 00:24:12.767 17:59:45 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:12.767 Fuzzing completed. Shutting down the fuzz application 00:24:12.767 00:24:12.767 Dumping successful admin opcodes: 00:24:12.767 24, 00:24:12.767 Dumping successful io opcodes: 00:24:12.767 00:24:12.767 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1279082992 00:24:12.767 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1279215216 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:12.767 rmmod nvme_tcp 00:24:12.767 rmmod nvme_fabrics 00:24:12.767 rmmod nvme_keyring 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 996951 ']' 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 996951 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 996951 ']' 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 996951 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 996951 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:12.767 
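The second pass above replays a fixed command list from a JSON file instead of generating random commands, after which the subsystem is deleted and the target shut down. A sketch, with $trid and $SPDK_DIR as in the previous snippet and $nvmfpid holding the target pid the way fabrics_fuzz.sh does:

# Deterministic replay pass, then teardown. The harness's killprocess does extra
# sanity checks (process name, sudo handling); a plain kill is shown only as a sketch.
"$SPDK_DIR"/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" \
    -j "$SPDK_DIR"/test/app/fuzz/nvme_fuzz/example.json -a
"$SPDK_DIR"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid"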
17:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 996951' 00:24:12.767 killing process with pid 996951 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 996951 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 996951 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.767 17:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.296 17:59:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:15.296 17:59:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:15.296 00:24:15.296 real 0m36.794s 00:24:15.296 user 0m50.535s 00:24:15.296 sys 0m15.900s 00:24:15.296 17:59:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:15.296 17:59:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:15.296 ************************************ 00:24:15.296 END TEST nvmf_fuzz 00:24:15.296 ************************************ 00:24:15.296 17:59:49 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:15.296 17:59:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:15.296 17:59:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:15.296 17:59:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:15.296 ************************************ 00:24:15.296 START TEST nvmf_multiconnection 00:24:15.296 ************************************ 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:15.296 * Looking for test storage... 
00:24:15.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:15.296 17:59:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.196 17:59:51 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:17.196 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:17.196 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:17.196 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:17.196 17:59:51 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:17.196 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:17.196 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:17.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:24:17.197 00:24:17.197 --- 10.0.0.2 ping statistics --- 00:24:17.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.197 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:17.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:24:17.197 00:24:17.197 --- 10.0.0.1 ping statistics --- 00:24:17.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.197 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1002669 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1002669 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 1002669 ']' 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
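The "Waiting for process to start up and listen on UNIX domain socket..." message comes from the waitforlisten helper. A minimal stand-in for it, assuming all it has to establish is that the process is alive and the RPC socket answers (the real helper in autotest_common.sh also handles retries, alternate socket paths and error reporting), might look like:

# Sketch only: poll until nvmf_tgt answers on its RPC socket.
wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || return 1                      # target died early
        "$SPDK_DIR"/scripts/rpc.py -s "$sock" rpc_get_methods \
            >/dev/null 2>&1 && return 0                             # RPC server is up
        sleep 0.1
    done
    return 1
}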
00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:17.197 17:59:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.197 [2024-07-20 17:59:51.857486] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:17.197 [2024-07-20 17:59:51.857583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.197 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.197 [2024-07-20 17:59:51.926984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:17.455 [2024-07-20 17:59:52.019959] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.455 [2024-07-20 17:59:52.020010] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.455 [2024-07-20 17:59:52.020035] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.455 [2024-07-20 17:59:52.020046] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.455 [2024-07-20 17:59:52.020059] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:17.455 [2024-07-20 17:59:52.020384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.455 [2024-07-20 17:59:52.020451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.455 [2024-07-20 17:59:52.020522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.455 [2024-07-20 17:59:52.020524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.455 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:17.455 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.456 [2024-07-20 17:59:52.174597] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.456 17:59:52 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.456 Malloc1 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.456 [2024-07-20 17:59:52.232372] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.456 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.714 Malloc2 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.714 17:59:52 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.714 Malloc3 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.714 Malloc4 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.714 Malloc5 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.714 Malloc6 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:17.714 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.715 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.715 17:59:52 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.715 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:17.715 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.715 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.715 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.715 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:17.715 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.715 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.715 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.715 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.715 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:17.715 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.715 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.715 Malloc7 00:24:17.715 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.715 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:17.715 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.715 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.973 Malloc8 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.973 Malloc9 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.973 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.974 Malloc10 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.974 Malloc11 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
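The eleven near-identical create/add-ns/add-listener blocks above are produced by a single loop in multiconnection.sh (NVMF_SUBSYS=11, MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 were set earlier in the trace). Collapsed back into loop form, again going through rpc.py directly as a sketch rather than the harness's rpc_cmd:

RPC="$SPDK_DIR/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do
    $RPC bdev_malloc_create 64 512 -b "Malloc$i"                       # 64 MiB, 512-byte blocks
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done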
00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.974 17:59:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:18.587 17:59:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:18.587 17:59:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:18.587 17:59:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:18.587 17:59:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:18.587 17:59:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:21.109 17:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:21.109 17:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:21.109 17:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:24:21.109 17:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:21.109 17:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:21.109 17:59:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:21.109 17:59:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.109 17:59:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:21.366 17:59:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:21.366 17:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:21.366 17:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:21.366 17:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:21.366 17:59:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:23.264 17:59:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:23.264 17:59:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:23.264 17:59:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:24:23.264 17:59:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:23.264 17:59:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:23.264 
17:59:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:23.264 17:59:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.264 17:59:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:23.830 17:59:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:23.830 17:59:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:23.830 17:59:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:23.830 17:59:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:23.830 17:59:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:26.354 18:00:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:26.354 18:00:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:26.354 18:00:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:24:26.354 18:00:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:26.354 18:00:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:26.354 18:00:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:26.354 18:00:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:26.354 18:00:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:26.610 18:00:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:26.610 18:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:26.610 18:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:26.610 18:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:26.610 18:00:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:28.512 18:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:28.512 18:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:28.513 18:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:24:28.513 18:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:28.513 18:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:28.513 18:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:28.513 18:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:28.513 18:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:29.443 18:00:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:29.443 18:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:29.443 18:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:29.443 18:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:29.443 18:00:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:31.335 18:00:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:31.335 18:00:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:31.335 18:00:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:24:31.335 18:00:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:31.335 18:00:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:31.335 18:00:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:31.335 18:00:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.335 18:00:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:32.266 18:00:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:32.266 18:00:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:32.266 18:00:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:32.266 18:00:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:32.266 18:00:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:34.184 18:00:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:34.184 18:00:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:34.184 18:00:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:24:34.184 18:00:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:34.184 18:00:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:34.184 18:00:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:34.184 18:00:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.184 18:00:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:35.116 18:00:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:35.116 18:00:09 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:35.116 18:00:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:35.116 18:00:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:35.116 18:00:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:37.081 18:00:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:37.081 18:00:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:37.081 18:00:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:24:37.081 18:00:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:37.081 18:00:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:37.081 18:00:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:37.081 18:00:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.081 18:00:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:37.645 18:00:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:37.645 18:00:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:37.646 18:00:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:37.646 18:00:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:37.646 18:00:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:40.168 18:00:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:40.168 18:00:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:40.168 18:00:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:24:40.168 18:00:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:40.168 18:00:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:40.168 18:00:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:40.168 18:00:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.168 18:00:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:40.732 18:00:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:40.732 18:00:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:40.732 18:00:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:40.732 18:00:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 
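After each nvme connect above, the test blocks in waitforserial until a block device with the matching serial appears. A hedged reconstruction of that helper from the common/autotest_common.sh xtrace lines shown here (1194-1204) follows; the retry path after a failed check never fires in this log, so its exact form is an assumption:

# Sketch only; not the verbatim upstream autotest_common.sh implementation.
waitforserial() {
	local serial=$1
	local i=0
	local nvme_device_counter=1 nvme_devices=0
	[[ -n ${2:-} ]] && nvme_device_counter=$2   # optional expected device count (line 1196)
	sleep 2                                     # settle time before the first check (line 1201)
	while ((i++ <= 15)); do                     # bounded retry (line 1202)
		nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
		((nvme_devices == nvme_device_counter)) && return 0
		sleep 1                                 # assumed back-off between retries
	done
	return 1
}

# Typical call site, mirroring the traced loop:
#   nvme connect --hostnqn=... --hostid=... -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420
#   waitforserial SPDK9

In every iteration of this run the device shows up on the first check (nvme_devices=1), so only the success path of the loop is visible in the trace.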
00:24:40.732 18:00:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:42.625 18:00:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:42.625 18:00:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:42.625 18:00:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:24:42.625 18:00:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:42.625 18:00:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:42.625 18:00:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:42.625 18:00:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.625 18:00:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:43.556 18:00:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:43.556 18:00:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:43.556 18:00:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:43.556 18:00:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:43.556 18:00:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:45.451 18:00:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:45.451 18:00:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:45.451 18:00:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:24:45.451 18:00:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:45.451 18:00:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:45.451 18:00:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:45.451 18:00:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:45.451 18:00:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:46.386 18:00:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:46.386 18:00:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:46.386 18:00:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:46.386 18:00:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:46.386 18:00:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:48.283 18:00:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:48.283 18:00:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o 
NAME,SERIAL 00:24:48.283 18:00:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:24:48.283 18:00:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:48.283 18:00:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:48.283 18:00:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:48.283 18:00:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:48.283 [global] 00:24:48.283 thread=1 00:24:48.283 invalidate=1 00:24:48.283 rw=read 00:24:48.283 time_based=1 00:24:48.283 runtime=10 00:24:48.283 ioengine=libaio 00:24:48.283 direct=1 00:24:48.283 bs=262144 00:24:48.283 iodepth=64 00:24:48.283 norandommap=1 00:24:48.283 numjobs=1 00:24:48.283 00:24:48.283 [job0] 00:24:48.283 filename=/dev/nvme0n1 00:24:48.283 [job1] 00:24:48.283 filename=/dev/nvme10n1 00:24:48.283 [job2] 00:24:48.283 filename=/dev/nvme1n1 00:24:48.283 [job3] 00:24:48.283 filename=/dev/nvme2n1 00:24:48.283 [job4] 00:24:48.283 filename=/dev/nvme3n1 00:24:48.283 [job5] 00:24:48.283 filename=/dev/nvme4n1 00:24:48.283 [job6] 00:24:48.283 filename=/dev/nvme5n1 00:24:48.283 [job7] 00:24:48.283 filename=/dev/nvme6n1 00:24:48.283 [job8] 00:24:48.283 filename=/dev/nvme7n1 00:24:48.283 [job9] 00:24:48.283 filename=/dev/nvme8n1 00:24:48.283 [job10] 00:24:48.283 filename=/dev/nvme9n1 00:24:48.283 Could not set queue depth (nvme0n1) 00:24:48.283 Could not set queue depth (nvme10n1) 00:24:48.283 Could not set queue depth (nvme1n1) 00:24:48.283 Could not set queue depth (nvme2n1) 00:24:48.283 Could not set queue depth (nvme3n1) 00:24:48.283 Could not set queue depth (nvme4n1) 00:24:48.283 Could not set queue depth (nvme5n1) 00:24:48.283 Could not set queue depth (nvme6n1) 00:24:48.283 Could not set queue depth (nvme7n1) 00:24:48.283 Could not set queue depth (nvme8n1) 00:24:48.283 Could not set queue depth (nvme9n1) 00:24:48.540 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:48.540 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:48.540 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:48.540 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:48.540 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:48.540 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:48.540 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:48.540 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:48.540 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:48.540 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:48.540 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:48.540 fio-3.35 00:24:48.540 Starting 11 threads 00:25:00.754 00:25:00.754 job0: 
(groupid=0, jobs=1): err= 0: pid=1007400: Sat Jul 20 18:00:34 2024 00:25:00.754 read: IOPS=1050, BW=263MiB/s (275MB/s)(2634MiB/10028msec) 00:25:00.754 slat (usec): min=13, max=49382, avg=930.78, stdev=2346.91 00:25:00.754 clat (msec): min=19, max=346, avg=59.93, stdev=23.23 00:25:00.754 lat (msec): min=19, max=346, avg=60.86, stdev=23.57 00:25:00.754 clat percentiles (msec): 00:25:00.754 | 1.00th=[ 41], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 48], 00:25:00.754 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 59], 00:25:00.754 | 70.00th=[ 61], 80.00th=[ 63], 90.00th=[ 73], 95.00th=[ 85], 00:25:00.754 | 99.00th=[ 176], 99.50th=[ 251], 99.90th=[ 309], 99.95th=[ 317], 00:25:00.754 | 99.99th=[ 330] 00:25:00.754 bw ( KiB/s): min=99840, max=351232, per=19.36%, avg=268118.95, stdev=59507.25, samples=20 00:25:00.754 iops : min= 390, max= 1372, avg=1047.25, stdev=232.36, samples=20 00:25:00.754 lat (msec) : 20=0.01%, 50=26.70%, 100=70.61%, 250=2.17%, 500=0.50% 00:25:00.754 cpu : usr=0.72%, sys=3.37%, ctx=2112, majf=0, minf=4097 00:25:00.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:00.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.754 issued rwts: total=10535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.754 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.754 job1: (groupid=0, jobs=1): err= 0: pid=1007401: Sat Jul 20 18:00:34 2024 00:25:00.754 read: IOPS=400, BW=100MiB/s (105MB/s)(1023MiB/10217msec) 00:25:00.754 slat (usec): min=14, max=831140, avg=2333.38, stdev=24232.53 00:25:00.754 clat (msec): min=11, max=2597, avg=157.36, stdev=352.47 00:25:00.754 lat (msec): min=12, max=2684, avg=159.69, stdev=357.63 00:25:00.754 clat percentiles (msec): 00:25:00.754 | 1.00th=[ 29], 5.00th=[ 45], 10.00th=[ 51], 20.00th=[ 57], 00:25:00.754 | 30.00th=[ 60], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 78], 00:25:00.754 | 70.00th=[ 83], 80.00th=[ 100], 90.00th=[ 186], 95.00th=[ 232], 00:25:00.754 | 99.00th=[ 2022], 99.50th=[ 2106], 99.90th=[ 2299], 99.95th=[ 2299], 00:25:00.754 | 99.99th=[ 2601] 00:25:00.754 bw ( KiB/s): min= 1536, max=294912, per=7.83%, avg=108475.79, stdev=105685.32, samples=19 00:25:00.754 iops : min= 6, max= 1152, avg=423.68, stdev=412.83, samples=19 00:25:00.754 lat (msec) : 20=0.22%, 50=9.02%, 100=70.86%, 250=15.18%, 500=0.15% 00:25:00.754 lat (msec) : 1000=0.56%, 2000=2.84%, >=2000=1.17% 00:25:00.754 cpu : usr=0.20%, sys=1.48%, ctx=951, majf=0, minf=4097 00:25:00.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:00.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.754 issued rwts: total=4090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.754 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.754 job2: (groupid=0, jobs=1): err= 0: pid=1007408: Sat Jul 20 18:00:34 2024 00:25:00.754 read: IOPS=815, BW=204MiB/s (214MB/s)(2044MiB/10032msec) 00:25:00.754 slat (usec): min=12, max=212864, avg=1094.73, stdev=4849.29 00:25:00.754 clat (msec): min=11, max=504, avg=77.35, stdev=51.84 00:25:00.754 lat (msec): min=11, max=621, avg=78.44, stdev=52.53 00:25:00.754 clat percentiles (msec): 00:25:00.754 | 1.00th=[ 27], 5.00th=[ 49], 10.00th=[ 54], 20.00th=[ 58], 00:25:00.754 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 65], 60.00th=[ 67], 00:25:00.754 | 70.00th=[ 
71], 80.00th=[ 81], 90.00th=[ 104], 95.00th=[ 153], 00:25:00.754 | 99.00th=[ 393], 99.50th=[ 418], 99.90th=[ 439], 99.95th=[ 439], 00:25:00.754 | 99.99th=[ 506] 00:25:00.754 bw ( KiB/s): min=39936, max=283648, per=15.00%, avg=207681.05, stdev=66604.26, samples=20 00:25:00.754 iops : min= 156, max= 1108, avg=811.25, stdev=260.18, samples=20 00:25:00.754 lat (msec) : 20=0.50%, 50=5.28%, 100=83.11%, 250=8.71%, 500=2.37% 00:25:00.754 lat (msec) : 750=0.02% 00:25:00.754 cpu : usr=0.66%, sys=2.71%, ctx=1764, majf=0, minf=3721 00:25:00.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:00.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.754 issued rwts: total=8177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.754 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.754 job3: (groupid=0, jobs=1): err= 0: pid=1007409: Sat Jul 20 18:00:34 2024 00:25:00.754 read: IOPS=219, BW=54.8MiB/s (57.5MB/s)(559MiB/10197msec) 00:25:00.754 slat (usec): min=9, max=857577, avg=3701.03, stdev=26606.50 00:25:00.754 clat (msec): min=4, max=2174, avg=287.72, stdev=432.62 00:25:00.754 lat (msec): min=4, max=2438, avg=291.42, stdev=439.15 00:25:00.754 clat percentiles (msec): 00:25:00.754 | 1.00th=[ 23], 5.00th=[ 79], 10.00th=[ 100], 20.00th=[ 131], 00:25:00.754 | 30.00th=[ 150], 40.00th=[ 157], 50.00th=[ 167], 60.00th=[ 176], 00:25:00.754 | 70.00th=[ 186], 80.00th=[ 199], 90.00th=[ 334], 95.00th=[ 1703], 00:25:00.754 | 99.00th=[ 2039], 99.50th=[ 2039], 99.90th=[ 2165], 99.95th=[ 2165], 00:25:00.754 | 99.99th=[ 2165] 00:25:00.754 bw ( KiB/s): min= 6144, max=124928, per=4.23%, avg=58555.95, stdev=47756.56, samples=19 00:25:00.754 iops : min= 24, max= 488, avg=228.68, stdev=186.51, samples=19 00:25:00.754 lat (msec) : 10=0.09%, 20=0.49%, 50=1.12%, 100=8.63%, 250=76.80% 00:25:00.754 lat (msec) : 500=3.58%, 750=1.12%, 1000=1.07%, 2000=5.63%, >=2000=1.48% 00:25:00.754 cpu : usr=0.16%, sys=0.79%, ctx=738, majf=0, minf=4097 00:25:00.754 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:25:00.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.754 issued rwts: total=2237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.754 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.754 job4: (groupid=0, jobs=1): err= 0: pid=1007413: Sat Jul 20 18:00:34 2024 00:25:00.754 read: IOPS=243, BW=60.8MiB/s (63.8MB/s)(622MiB/10219msec) 00:25:00.754 slat (usec): min=11, max=576029, avg=3729.67, stdev=22101.19 00:25:00.754 clat (msec): min=5, max=2213, avg=259.09, stdev=433.35 00:25:00.754 lat (msec): min=5, max=2269, avg=262.82, stdev=439.72 00:25:00.754 clat percentiles (msec): 00:25:00.754 | 1.00th=[ 20], 5.00th=[ 45], 10.00th=[ 64], 20.00th=[ 87], 00:25:00.754 | 30.00th=[ 101], 40.00th=[ 113], 50.00th=[ 122], 60.00th=[ 134], 00:25:00.754 | 70.00th=[ 153], 80.00th=[ 239], 90.00th=[ 414], 95.00th=[ 1687], 00:25:00.754 | 99.00th=[ 2039], 99.50th=[ 2123], 99.90th=[ 2198], 99.95th=[ 2198], 00:25:00.754 | 99.99th=[ 2198] 00:25:00.754 bw ( KiB/s): min= 1536, max=164864, per=4.48%, avg=61979.75, stdev=62299.53, samples=20 00:25:00.754 iops : min= 6, max= 644, avg=242.05, stdev=243.29, samples=20 00:25:00.754 lat (msec) : 10=0.24%, 20=1.09%, 50=5.95%, 100=22.53%, 250=50.76% 00:25:00.754 lat (msec) : 500=11.14%, 750=1.21%, 
1000=0.72%, 2000=3.98%, >=2000=2.37% 00:25:00.754 cpu : usr=0.15%, sys=0.90%, ctx=663, majf=0, minf=4097 00:25:00.754 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:25:00.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.754 issued rwts: total=2486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.754 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.754 job5: (groupid=0, jobs=1): err= 0: pid=1007444: Sat Jul 20 18:00:34 2024 00:25:00.754 read: IOPS=521, BW=130MiB/s (137MB/s)(1338MiB/10261msec) 00:25:00.754 slat (usec): min=11, max=916914, avg=1597.37, stdev=17209.49 00:25:00.755 clat (msec): min=2, max=1772, avg=121.02, stdev=172.47 00:25:00.755 lat (msec): min=2, max=1772, avg=122.61, stdev=173.41 00:25:00.755 clat percentiles (msec): 00:25:00.755 | 1.00th=[ 10], 5.00th=[ 31], 10.00th=[ 44], 20.00th=[ 54], 00:25:00.755 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 77], 60.00th=[ 90], 00:25:00.755 | 70.00th=[ 113], 80.00th=[ 153], 90.00th=[ 186], 95.00th=[ 266], 00:25:00.755 | 99.00th=[ 1070], 99.50th=[ 1435], 99.90th=[ 1670], 99.95th=[ 1720], 00:25:00.755 | 99.99th=[ 1770] 00:25:00.755 bw ( KiB/s): min=33280, max=269824, per=10.86%, avg=150364.17, stdev=80010.75, samples=18 00:25:00.755 iops : min= 130, max= 1054, avg=587.33, stdev=312.52, samples=18 00:25:00.755 lat (msec) : 4=0.07%, 10=0.97%, 20=2.37%, 50=11.30%, 100=50.71% 00:25:00.755 lat (msec) : 250=29.26%, 500=2.93%, 750=0.02%, 1000=1.31%, 2000=1.05% 00:25:00.755 cpu : usr=0.32%, sys=1.85%, ctx=1307, majf=0, minf=4097 00:25:00.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:00.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.755 issued rwts: total=5352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.755 job6: (groupid=0, jobs=1): err= 0: pid=1007466: Sat Jul 20 18:00:34 2024 00:25:00.755 read: IOPS=465, BW=116MiB/s (122MB/s)(1177MiB/10122msec) 00:25:00.755 slat (usec): min=14, max=110485, avg=2020.51, stdev=5779.23 00:25:00.755 clat (msec): min=2, max=309, avg=135.45, stdev=56.01 00:25:00.755 lat (msec): min=2, max=309, avg=137.47, stdev=56.98 00:25:00.755 clat percentiles (msec): 00:25:00.755 | 1.00th=[ 20], 5.00th=[ 43], 10.00th=[ 51], 20.00th=[ 73], 00:25:00.755 | 30.00th=[ 107], 40.00th=[ 140], 50.00th=[ 148], 60.00th=[ 157], 00:25:00.755 | 70.00th=[ 171], 80.00th=[ 182], 90.00th=[ 194], 95.00th=[ 211], 00:25:00.755 | 99.00th=[ 257], 99.50th=[ 271], 99.90th=[ 296], 99.95th=[ 309], 00:25:00.755 | 99.99th=[ 309] 00:25:00.755 bw ( KiB/s): min=69771, max=313344, per=8.59%, avg=118890.90, stdev=57231.54, samples=20 00:25:00.755 iops : min= 272, max= 1224, avg=464.35, stdev=223.60, samples=20 00:25:00.755 lat (msec) : 4=0.11%, 10=0.32%, 20=0.59%, 50=8.52%, 100=20.24% 00:25:00.755 lat (msec) : 250=68.90%, 500=1.32% 00:25:00.755 cpu : usr=0.26%, sys=1.74%, ctx=1144, majf=0, minf=4097 00:25:00.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:00.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.755 issued rwts: total=4708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.755 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:25:00.755 job7: (groupid=0, jobs=1): err= 0: pid=1007488: Sat Jul 20 18:00:34 2024 00:25:00.755 read: IOPS=488, BW=122MiB/s (128MB/s)(1237MiB/10121msec) 00:25:00.755 slat (usec): min=14, max=79329, avg=1843.21, stdev=5371.28 00:25:00.755 clat (msec): min=5, max=263, avg=129.01, stdev=45.56 00:25:00.755 lat (msec): min=5, max=268, avg=130.85, stdev=46.32 00:25:00.755 clat percentiles (msec): 00:25:00.755 | 1.00th=[ 15], 5.00th=[ 48], 10.00th=[ 71], 20.00th=[ 85], 00:25:00.755 | 30.00th=[ 108], 40.00th=[ 124], 50.00th=[ 133], 60.00th=[ 146], 00:25:00.755 | 70.00th=[ 157], 80.00th=[ 167], 90.00th=[ 182], 95.00th=[ 201], 00:25:00.755 | 99.00th=[ 218], 99.50th=[ 232], 99.90th=[ 251], 99.95th=[ 251], 00:25:00.755 | 99.99th=[ 264] 00:25:00.755 bw ( KiB/s): min=82944, max=222275, per=9.02%, avg=124971.90, stdev=37935.99, samples=20 00:25:00.755 iops : min= 324, max= 868, avg=488.15, stdev=148.16, samples=20 00:25:00.755 lat (msec) : 10=0.40%, 20=1.07%, 50=3.82%, 100=22.52%, 250=72.06% 00:25:00.755 lat (msec) : 500=0.12% 00:25:00.755 cpu : usr=0.27%, sys=1.81%, ctx=1209, majf=0, minf=4097 00:25:00.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:00.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.755 issued rwts: total=4946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.755 job8: (groupid=0, jobs=1): err= 0: pid=1007550: Sat Jul 20 18:00:34 2024 00:25:00.755 read: IOPS=424, BW=106MiB/s (111MB/s)(1084MiB/10220msec) 00:25:00.755 slat (usec): min=13, max=1431.8k, avg=2113.07, stdev=26624.25 00:25:00.755 clat (msec): min=4, max=2165, avg=148.59, stdev=262.29 00:25:00.755 lat (msec): min=4, max=3062, avg=150.70, stdev=266.04 00:25:00.755 clat percentiles (msec): 00:25:00.755 | 1.00th=[ 28], 5.00th=[ 44], 10.00th=[ 51], 20.00th=[ 56], 00:25:00.755 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 106], 60.00th=[ 127], 00:25:00.755 | 70.00th=[ 136], 80.00th=[ 153], 90.00th=[ 190], 95.00th=[ 220], 00:25:00.755 | 99.00th=[ 1737], 99.50th=[ 1871], 99.90th=[ 2140], 99.95th=[ 2165], 00:25:00.755 | 99.99th=[ 2165] 00:25:00.755 bw ( KiB/s): min= 2048, max=293888, per=8.77%, avg=121509.83, stdev=80769.48, samples=18 00:25:00.755 iops : min= 8, max= 1148, avg=474.56, stdev=315.49, samples=18 00:25:00.755 lat (msec) : 10=0.12%, 20=0.14%, 50=9.80%, 100=38.21%, 250=48.73% 00:25:00.755 lat (msec) : 500=0.09%, 1000=0.07%, 2000=2.51%, >=2000=0.32% 00:25:00.755 cpu : usr=0.31%, sys=1.41%, ctx=1124, majf=0, minf=4097 00:25:00.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:00.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.755 issued rwts: total=4336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.755 job9: (groupid=0, jobs=1): err= 0: pid=1007558: Sat Jul 20 18:00:34 2024 00:25:00.755 read: IOPS=422, BW=106MiB/s (111MB/s)(1067MiB/10105msec) 00:25:00.755 slat (usec): min=14, max=87590, avg=2172.23, stdev=5546.57 00:25:00.755 clat (msec): min=39, max=270, avg=149.15, stdev=27.12 00:25:00.755 lat (msec): min=39, max=282, avg=151.32, stdev=27.52 00:25:00.755 clat percentiles (msec): 00:25:00.755 | 1.00th=[ 89], 5.00th=[ 105], 10.00th=[ 120], 20.00th=[ 
128], 00:25:00.755 | 30.00th=[ 136], 40.00th=[ 144], 50.00th=[ 148], 60.00th=[ 155], 00:25:00.755 | 70.00th=[ 159], 80.00th=[ 169], 90.00th=[ 186], 95.00th=[ 197], 00:25:00.755 | 99.00th=[ 226], 99.50th=[ 236], 99.90th=[ 259], 99.95th=[ 264], 00:25:00.755 | 99.99th=[ 271] 00:25:00.755 bw ( KiB/s): min=68096, max=153600, per=7.77%, avg=107654.60, stdev=18716.05, samples=20 00:25:00.755 iops : min= 266, max= 600, avg=420.50, stdev=73.13, samples=20 00:25:00.755 lat (msec) : 50=0.12%, 100=3.70%, 250=96.04%, 500=0.14% 00:25:00.755 cpu : usr=0.42%, sys=1.44%, ctx=999, majf=0, minf=4097 00:25:00.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:00.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.755 issued rwts: total=4269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.755 job10: (groupid=0, jobs=1): err= 0: pid=1007564: Sat Jul 20 18:00:34 2024 00:25:00.755 read: IOPS=435, BW=109MiB/s (114MB/s)(1092MiB/10031msec) 00:25:00.755 slat (usec): min=14, max=137837, avg=2196.32, stdev=7294.86 00:25:00.755 clat (msec): min=6, max=492, avg=144.64, stdev=65.33 00:25:00.755 lat (msec): min=6, max=492, avg=146.84, stdev=66.45 00:25:00.755 clat percentiles (msec): 00:25:00.755 | 1.00th=[ 24], 5.00th=[ 56], 10.00th=[ 66], 20.00th=[ 95], 00:25:00.755 | 30.00th=[ 116], 40.00th=[ 129], 50.00th=[ 140], 60.00th=[ 157], 00:25:00.755 | 70.00th=[ 171], 80.00th=[ 182], 90.00th=[ 205], 95.00th=[ 249], 00:25:00.755 | 99.00th=[ 393], 99.50th=[ 401], 99.90th=[ 418], 99.95th=[ 422], 00:25:00.755 | 99.99th=[ 493] 00:25:00.755 bw ( KiB/s): min=43520, max=223232, per=7.96%, avg=110197.10, stdev=40723.87, samples=20 00:25:00.755 iops : min= 170, max= 872, avg=430.40, stdev=159.12, samples=20 00:25:00.755 lat (msec) : 10=0.18%, 20=0.41%, 50=2.91%, 100=19.12%, 250=72.46% 00:25:00.755 lat (msec) : 500=4.92% 00:25:00.755 cpu : usr=0.25%, sys=1.63%, ctx=1024, majf=0, minf=4097 00:25:00.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:00.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.755 issued rwts: total=4368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.755 00:25:00.755 Run status group 0 (all jobs): 00:25:00.755 READ: bw=1352MiB/s (1418MB/s), 54.8MiB/s-263MiB/s (57.5MB/s-275MB/s), io=13.6GiB (14.6GB), run=10028-10261msec 00:25:00.755 00:25:00.755 Disk stats (read/write): 00:25:00.755 nvme0n1: ios=20884/0, merge=0/0, ticks=1236122/0, in_queue=1236122, util=95.14% 00:25:00.755 nvme10n1: ios=8116/0, merge=0/0, ticks=1212577/0, in_queue=1212577, util=95.65% 00:25:00.755 nvme1n1: ios=16135/0, merge=0/0, ticks=1239683/0, in_queue=1239683, util=96.08% 00:25:00.755 nvme2n1: ios=4347/0, merge=0/0, ticks=1112081/0, in_queue=1112081, util=96.34% 00:25:00.755 nvme3n1: ios=4921/0, merge=0/0, ticks=1246521/0, in_queue=1246521, util=96.60% 00:25:00.755 nvme4n1: ios=10576/0, merge=0/0, ticks=1181638/0, in_queue=1181638, util=97.25% 00:25:00.755 nvme5n1: ios=9255/0, merge=0/0, ticks=1230120/0, in_queue=1230120, util=97.49% 00:25:00.755 nvme6n1: ios=9723/0, merge=0/0, ticks=1228289/0, in_queue=1228289, util=97.71% 00:25:00.755 nvme7n1: ios=8564/0, merge=0/0, ticks=1134657/0, in_queue=1134657, 
util=98.62% 00:25:00.755 nvme8n1: ios=8387/0, merge=0/0, ticks=1231865/0, in_queue=1231865, util=98.95% 00:25:00.755 nvme9n1: ios=8494/0, merge=0/0, ticks=1231343/0, in_queue=1231343, util=99.19% 00:25:00.755 18:00:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:00.755 [global] 00:25:00.755 thread=1 00:25:00.755 invalidate=1 00:25:00.755 rw=randwrite 00:25:00.755 time_based=1 00:25:00.755 runtime=10 00:25:00.755 ioengine=libaio 00:25:00.755 direct=1 00:25:00.755 bs=262144 00:25:00.755 iodepth=64 00:25:00.755 norandommap=1 00:25:00.755 numjobs=1 00:25:00.755 00:25:00.755 [job0] 00:25:00.755 filename=/dev/nvme0n1 00:25:00.755 [job1] 00:25:00.755 filename=/dev/nvme10n1 00:25:00.755 [job2] 00:25:00.755 filename=/dev/nvme1n1 00:25:00.755 [job3] 00:25:00.755 filename=/dev/nvme2n1 00:25:00.755 [job4] 00:25:00.755 filename=/dev/nvme3n1 00:25:00.755 [job5] 00:25:00.755 filename=/dev/nvme4n1 00:25:00.755 [job6] 00:25:00.755 filename=/dev/nvme5n1 00:25:00.755 [job7] 00:25:00.755 filename=/dev/nvme6n1 00:25:00.755 [job8] 00:25:00.755 filename=/dev/nvme7n1 00:25:00.755 [job9] 00:25:00.755 filename=/dev/nvme8n1 00:25:00.755 [job10] 00:25:00.755 filename=/dev/nvme9n1 00:25:00.755 Could not set queue depth (nvme0n1) 00:25:00.755 Could not set queue depth (nvme10n1) 00:25:00.755 Could not set queue depth (nvme1n1) 00:25:00.755 Could not set queue depth (nvme2n1) 00:25:00.756 Could not set queue depth (nvme3n1) 00:25:00.756 Could not set queue depth (nvme4n1) 00:25:00.756 Could not set queue depth (nvme5n1) 00:25:00.756 Could not set queue depth (nvme6n1) 00:25:00.756 Could not set queue depth (nvme7n1) 00:25:00.756 Could not set queue depth (nvme8n1) 00:25:00.756 Could not set queue depth (nvme9n1) 00:25:00.756 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.756 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.756 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.756 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.756 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.756 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.756 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.756 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.756 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.756 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.756 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:00.756 fio-3.35 00:25:00.756 Starting 11 threads 00:25:10.823 00:25:10.823 job0: (groupid=0, jobs=1): err= 0: pid=1008302: Sat Jul 20 18:00:45 2024 00:25:10.823 write: IOPS=295, BW=73.9MiB/s (77.5MB/s)(751MiB/10162msec); 0 zone resets 00:25:10.823 slat (usec): min=23, max=133964, 
avg=3030.07, stdev=7491.52 00:25:10.823 clat (msec): min=8, max=662, avg=213.43, stdev=118.59 00:25:10.823 lat (msec): min=11, max=662, avg=216.46, stdev=120.19 00:25:10.823 clat percentiles (msec): 00:25:10.823 | 1.00th=[ 20], 5.00th=[ 53], 10.00th=[ 112], 20.00th=[ 140], 00:25:10.823 | 30.00th=[ 157], 40.00th=[ 174], 50.00th=[ 182], 60.00th=[ 192], 00:25:10.823 | 70.00th=[ 220], 80.00th=[ 309], 90.00th=[ 368], 95.00th=[ 439], 00:25:10.823 | 99.00th=[ 617], 99.50th=[ 625], 99.90th=[ 659], 99.95th=[ 659], 00:25:10.823 | 99.99th=[ 659] 00:25:10.823 bw ( KiB/s): min=26624, max=118784, per=10.01%, avg=75273.55, stdev=28958.80, samples=20 00:25:10.823 iops : min= 104, max= 464, avg=294.00, stdev=113.09, samples=20 00:25:10.823 lat (msec) : 10=0.03%, 20=1.00%, 50=3.53%, 100=3.40%, 250=68.10% 00:25:10.823 lat (msec) : 500=19.85%, 750=4.10% 00:25:10.823 cpu : usr=0.66%, sys=0.72%, ctx=1121, majf=0, minf=1 00:25:10.823 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:25:10.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.823 issued rwts: total=0,3003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.823 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.823 job1: (groupid=0, jobs=1): err= 0: pid=1008307: Sat Jul 20 18:00:45 2024 00:25:10.823 write: IOPS=194, BW=48.6MiB/s (50.9MB/s)(496MiB/10209msec); 0 zone resets 00:25:10.823 slat (usec): min=17, max=1830.1k, avg=1457.53, stdev=41285.96 00:25:10.823 clat (msec): min=14, max=4561, avg=327.92, stdev=690.78 00:25:10.823 lat (msec): min=14, max=4561, avg=329.37, stdev=691.79 00:25:10.823 clat percentiles (msec): 00:25:10.823 | 1.00th=[ 27], 5.00th=[ 40], 10.00th=[ 62], 20.00th=[ 86], 00:25:10.823 | 30.00th=[ 101], 40.00th=[ 111], 50.00th=[ 118], 60.00th=[ 127], 00:25:10.823 | 70.00th=[ 142], 80.00th=[ 251], 90.00th=[ 542], 95.00th=[ 2123], 00:25:10.823 | 99.00th=[ 3943], 99.50th=[ 3943], 99.90th=[ 4530], 99.95th=[ 4530], 00:25:10.823 | 99.99th=[ 4530] 00:25:10.823 bw ( KiB/s): min= 4608, max=162304, per=8.71%, avg=65498.67, stdev=45840.91, samples=15 00:25:10.823 iops : min= 18, max= 634, avg=255.80, stdev=179.12, samples=15 00:25:10.823 lat (msec) : 20=0.25%, 50=7.11%, 100=22.59%, 250=50.03%, 500=7.87% 00:25:10.823 lat (msec) : 750=5.30%, 2000=1.31%, >=2000=5.55% 00:25:10.823 cpu : usr=0.54%, sys=0.55%, ctx=1624, majf=0, minf=1 00:25:10.823 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:25:10.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.823 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.823 issued rwts: total=0,1983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.823 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.823 job2: (groupid=0, jobs=1): err= 0: pid=1008315: Sat Jul 20 18:00:45 2024 00:25:10.823 write: IOPS=446, BW=112MiB/s (117MB/s)(1128MiB/10098msec); 0 zone resets 00:25:10.823 slat (usec): min=23, max=48709, avg=2210.02, stdev=4369.25 00:25:10.823 clat (msec): min=35, max=365, avg=140.96, stdev=52.95 00:25:10.823 lat (msec): min=35, max=365, avg=143.17, stdev=53.57 00:25:10.823 clat percentiles (msec): 00:25:10.823 | 1.00th=[ 96], 5.00th=[ 102], 10.00th=[ 104], 20.00th=[ 108], 00:25:10.823 | 30.00th=[ 110], 40.00th=[ 113], 50.00th=[ 116], 60.00th=[ 124], 00:25:10.823 | 70.00th=[ 140], 80.00th=[ 171], 90.00th=[ 232], 95.00th=[ 255], 00:25:10.823 | 99.00th=[ 338], 
99.50th=[ 351], 99.90th=[ 363], 99.95th=[ 368], 00:25:10.823 | 99.99th=[ 368] 00:25:10.823 bw ( KiB/s): min=45056, max=149504, per=15.15%, avg=113894.40, stdev=34216.60, samples=20 00:25:10.823 iops : min= 176, max= 584, avg=444.90, stdev=133.66, samples=20 00:25:10.823 lat (msec) : 50=0.09%, 100=4.08%, 250=90.34%, 500=5.50% 00:25:10.823 cpu : usr=1.15%, sys=1.33%, ctx=1178, majf=0, minf=1 00:25:10.823 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:10.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.823 issued rwts: total=0,4512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.823 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.823 job3: (groupid=0, jobs=1): err= 0: pid=1008316: Sat Jul 20 18:00:45 2024 00:25:10.823 write: IOPS=327, BW=82.0MiB/s (86.0MB/s)(839MiB/10235msec); 0 zone resets 00:25:10.823 slat (usec): min=25, max=120465, avg=2903.59, stdev=6333.49 00:25:10.823 clat (msec): min=4, max=490, avg=192.04, stdev=82.44 00:25:10.823 lat (msec): min=4, max=490, avg=194.94, stdev=83.48 00:25:10.823 clat percentiles (msec): 00:25:10.823 | 1.00th=[ 28], 5.00th=[ 99], 10.00th=[ 105], 20.00th=[ 113], 00:25:10.823 | 30.00th=[ 124], 40.00th=[ 140], 50.00th=[ 174], 60.00th=[ 241], 00:25:10.823 | 70.00th=[ 257], 80.00th=[ 268], 90.00th=[ 292], 95.00th=[ 321], 00:25:10.823 | 99.00th=[ 363], 99.50th=[ 426], 99.90th=[ 477], 99.95th=[ 489], 00:25:10.823 | 99.99th=[ 489] 00:25:10.823 bw ( KiB/s): min=49152, max=163328, per=11.22%, avg=84325.40, stdev=36659.48, samples=20 00:25:10.823 iops : min= 192, max= 638, avg=329.35, stdev=143.24, samples=20 00:25:10.823 lat (msec) : 10=0.18%, 20=0.42%, 50=1.55%, 100=3.87%, 250=59.82% 00:25:10.823 lat (msec) : 500=34.17% 00:25:10.823 cpu : usr=0.89%, sys=0.94%, ctx=1016, majf=0, minf=1 00:25:10.823 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:10.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.823 issued rwts: total=0,3357,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.823 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.823 job4: (groupid=0, jobs=1): err= 0: pid=1008323: Sat Jul 20 18:00:45 2024 00:25:10.823 write: IOPS=287, BW=71.8MiB/s (75.3MB/s)(732MiB/10203msec); 0 zone resets 00:25:10.823 slat (usec): min=20, max=989149, avg=3026.96, stdev=23304.98 00:25:10.823 clat (msec): min=5, max=1123, avg=219.75, stdev=183.46 00:25:10.823 lat (msec): min=5, max=1123, avg=222.77, stdev=185.07 00:25:10.823 clat percentiles (msec): 00:25:10.823 | 1.00th=[ 24], 5.00th=[ 54], 10.00th=[ 100], 20.00th=[ 127], 00:25:10.823 | 30.00th=[ 140], 40.00th=[ 148], 50.00th=[ 155], 60.00th=[ 167], 00:25:10.823 | 70.00th=[ 188], 80.00th=[ 300], 90.00th=[ 430], 95.00th=[ 592], 00:25:10.824 | 99.00th=[ 1053], 99.50th=[ 1083], 99.90th=[ 1116], 99.95th=[ 1116], 00:25:10.824 | 99.99th=[ 1116] 00:25:10.824 bw ( KiB/s): min=26059, max=118784, per=10.28%, avg=77234.84, stdev=29290.71, samples=19 00:25:10.824 iops : min= 101, max= 464, avg=301.63, stdev=114.51, samples=19 00:25:10.824 lat (msec) : 10=0.14%, 20=0.55%, 50=3.48%, 100=5.91%, 250=66.10% 00:25:10.824 lat (msec) : 500=17.21%, 750=4.27%, 1000=0.44%, 2000=1.91% 00:25:10.824 cpu : usr=0.81%, sys=0.75%, ctx=1070, majf=0, minf=1 00:25:10.824 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, 
>=64=97.8% 00:25:10.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.824 issued rwts: total=0,2929,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.824 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.824 job5: (groupid=0, jobs=1): err= 0: pid=1008324: Sat Jul 20 18:00:45 2024 00:25:10.824 write: IOPS=345, BW=86.3MiB/s (90.5MB/s)(878MiB/10172msec); 0 zone resets 00:25:10.824 slat (usec): min=16, max=948970, avg=1804.12, stdev=16606.85 00:25:10.824 clat (msec): min=3, max=2430, avg=183.52, stdev=182.84 00:25:10.824 lat (msec): min=7, max=2437, avg=185.33, stdev=184.46 00:25:10.824 clat percentiles (msec): 00:25:10.824 | 1.00th=[ 16], 5.00th=[ 57], 10.00th=[ 93], 20.00th=[ 118], 00:25:10.824 | 30.00th=[ 131], 40.00th=[ 140], 50.00th=[ 148], 60.00th=[ 157], 00:25:10.824 | 70.00th=[ 169], 80.00th=[ 190], 90.00th=[ 284], 95.00th=[ 355], 00:25:10.824 | 99.00th=[ 1250], 99.50th=[ 1267], 99.90th=[ 2433], 99.95th=[ 2433], 00:25:10.824 | 99.99th=[ 2433] 00:25:10.824 bw ( KiB/s): min=29184, max=134387, per=12.36%, avg=92911.53, stdev=29129.72, samples=19 00:25:10.824 iops : min= 114, max= 524, avg=362.84, stdev=113.69, samples=19 00:25:10.824 lat (msec) : 4=0.03%, 10=0.31%, 20=1.11%, 50=2.93%, 100=7.32% 00:25:10.824 lat (msec) : 250=75.19%, 500=11.16%, 2000=1.79%, >=2000=0.14% 00:25:10.824 cpu : usr=0.89%, sys=0.96%, ctx=2142, majf=0, minf=1 00:25:10.824 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:25:10.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.824 issued rwts: total=0,3511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.824 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.824 job6: (groupid=0, jobs=1): err= 0: pid=1008325: Sat Jul 20 18:00:45 2024 00:25:10.824 write: IOPS=181, BW=45.5MiB/s (47.7MB/s)(462MiB/10169msec); 0 zone resets 00:25:10.824 slat (usec): min=25, max=987177, avg=4034.98, stdev=36931.44 00:25:10.824 clat (msec): min=17, max=3986, avg=347.66, stdev=617.94 00:25:10.824 lat (msec): min=17, max=3986, avg=351.70, stdev=619.70 00:25:10.824 clat percentiles (msec): 00:25:10.824 | 1.00th=[ 31], 5.00th=[ 59], 10.00th=[ 87], 20.00th=[ 122], 00:25:10.824 | 30.00th=[ 171], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 192], 00:25:10.824 | 70.00th=[ 203], 80.00th=[ 321], 90.00th=[ 558], 95.00th=[ 953], 00:25:10.824 | 99.00th=[ 3910], 99.50th=[ 3943], 99.90th=[ 3977], 99.95th=[ 3977], 00:25:10.824 | 99.99th=[ 3977] 00:25:10.824 bw ( KiB/s): min= 2048, max=123904, per=6.76%, avg=50798.78, stdev=39696.48, samples=18 00:25:10.824 iops : min= 8, max= 484, avg=198.39, stdev=155.09, samples=18 00:25:10.824 lat (msec) : 20=0.11%, 50=3.08%, 100=11.74%, 250=59.22%, 500=10.87% 00:25:10.824 lat (msec) : 750=7.84%, 1000=2.49%, 2000=1.24%, >=2000=3.41% 00:25:10.824 cpu : usr=0.50%, sys=0.42%, ctx=940, majf=0, minf=1 00:25:10.824 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:25:10.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.824 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.824 issued rwts: total=0,1849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.824 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.824 job7: (groupid=0, jobs=1): err= 0: pid=1008326: Sat Jul 20 18:00:45 2024 
00:25:10.824 write: IOPS=109, BW=27.5MiB/s (28.8MB/s)(282MiB/10253msec); 0 zone resets 00:25:10.824 slat (usec): min=21, max=2376.2k, avg=3235.58, stdev=73400.39 00:25:10.824 clat (msec): min=7, max=5141, avg=578.98, stdev=958.32 00:25:10.824 lat (msec): min=7, max=5141, avg=582.21, stdev=962.36 00:25:10.824 clat percentiles (msec): 00:25:10.824 | 1.00th=[ 17], 5.00th=[ 29], 10.00th=[ 43], 20.00th=[ 93], 00:25:10.824 | 30.00th=[ 105], 40.00th=[ 125], 50.00th=[ 167], 60.00th=[ 257], 00:25:10.824 | 70.00th=[ 439], 80.00th=[ 527], 90.00th=[ 2500], 95.00th=[ 3272], 00:25:10.824 | 99.00th=[ 3540], 99.50th=[ 5067], 99.90th=[ 5134], 99.95th=[ 5134], 00:25:10.824 | 99.99th=[ 5134] 00:25:10.824 bw ( KiB/s): min=15360, max=75264, per=4.83%, avg=36288.00, stdev=17633.54, samples=15 00:25:10.824 iops : min= 60, max= 294, avg=141.67, stdev=68.87, samples=15 00:25:10.824 lat (msec) : 10=0.27%, 20=1.42%, 50=10.48%, 100=14.83%, 250=32.68% 00:25:10.824 lat (msec) : 500=16.96%, 750=4.97%, 1000=1.07%, 2000=7.02%, >=2000=10.30% 00:25:10.824 cpu : usr=0.39%, sys=0.40%, ctx=1007, majf=0, minf=1 00:25:10.824 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:25:10.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.824 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.824 issued rwts: total=0,1126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.824 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.824 job8: (groupid=0, jobs=1): err= 0: pid=1008327: Sat Jul 20 18:00:45 2024 00:25:10.824 write: IOPS=204, BW=51.2MiB/s (53.6MB/s)(527MiB/10301msec); 0 zone resets 00:25:10.824 slat (usec): min=17, max=2761.0k, avg=2581.06, stdev=77395.41 00:25:10.824 clat (msec): min=3, max=3504, avg=309.99, stdev=698.25 00:25:10.824 lat (msec): min=3, max=3504, avg=312.57, stdev=703.50 00:25:10.824 clat percentiles (msec): 00:25:10.824 | 1.00th=[ 15], 5.00th=[ 29], 10.00th=[ 42], 20.00th=[ 49], 00:25:10.824 | 30.00th=[ 67], 40.00th=[ 86], 50.00th=[ 104], 60.00th=[ 114], 00:25:10.824 | 70.00th=[ 140], 80.00th=[ 178], 90.00th=[ 489], 95.00th=[ 2567], 00:25:10.824 | 99.00th=[ 3306], 99.50th=[ 3339], 99.90th=[ 3507], 99.95th=[ 3507], 00:25:10.824 | 99.99th=[ 3507] 00:25:10.824 bw ( KiB/s): min= 4608, max=171008, per=12.66%, avg=95138.91, stdev=54351.07, samples=11 00:25:10.824 iops : min= 18, max= 668, avg=371.64, stdev=212.31, samples=11 00:25:10.824 lat (msec) : 4=0.09%, 10=0.52%, 20=1.80%, 50=18.50%, 100=27.42% 00:25:10.824 lat (msec) : 250=36.86%, 500=4.93%, 750=1.76%, 1000=0.19%, 2000=1.94% 00:25:10.824 lat (msec) : >=2000=5.98% 00:25:10.824 cpu : usr=0.57%, sys=0.65%, ctx=1964, majf=0, minf=1 00:25:10.824 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:25:10.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.824 issued rwts: total=0,2108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.824 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.824 job9: (groupid=0, jobs=1): err= 0: pid=1008330: Sat Jul 20 18:00:45 2024 00:25:10.824 write: IOPS=257, BW=64.5MiB/s (67.6MB/s)(656MiB/10163msec); 0 zone resets 00:25:10.824 slat (usec): min=23, max=261378, avg=2472.38, stdev=10538.82 00:25:10.824 clat (msec): min=8, max=2252, avg=245.46, stdev=293.91 00:25:10.824 lat (msec): min=8, max=2252, avg=247.93, stdev=294.74 00:25:10.824 clat percentiles (msec): 00:25:10.824 | 1.00th=[ 21], 
5.00th=[ 39], 10.00th=[ 84], 20.00th=[ 104], 00:25:10.824 | 30.00th=[ 140], 40.00th=[ 171], 50.00th=[ 182], 60.00th=[ 188], 00:25:10.824 | 70.00th=[ 203], 80.00th=[ 271], 90.00th=[ 460], 95.00th=[ 701], 00:25:10.824 | 99.00th=[ 2165], 99.50th=[ 2232], 99.90th=[ 2232], 99.95th=[ 2265], 00:25:10.824 | 99.99th=[ 2265] 00:25:10.824 bw ( KiB/s): min=14307, max=160256, per=9.17%, avg=68956.79, stdev=36703.69, samples=19 00:25:10.824 iops : min= 55, max= 626, avg=269.32, stdev=143.45, samples=19 00:25:10.824 lat (msec) : 10=0.08%, 20=0.69%, 50=4.96%, 100=13.08%, 250=57.78% 00:25:10.824 lat (msec) : 500=15.45%, 750=4.27%, 1000=1.07%, 2000=1.30%, >=2000=1.33% 00:25:10.824 cpu : usr=0.60%, sys=0.66%, ctx=1417, majf=0, minf=1 00:25:10.824 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:25:10.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.824 issued rwts: total=0,2622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.824 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.824 job10: (groupid=0, jobs=1): err= 0: pid=1008331: Sat Jul 20 18:00:45 2024 00:25:10.824 write: IOPS=317, BW=79.3MiB/s (83.1MB/s)(811MiB/10235msec); 0 zone resets 00:25:10.824 slat (usec): min=24, max=248700, avg=2909.48, stdev=8318.88 00:25:10.824 clat (msec): min=16, max=500, avg=198.82, stdev=85.53 00:25:10.824 lat (msec): min=16, max=500, avg=201.73, stdev=86.36 00:25:10.824 clat percentiles (msec): 00:25:10.824 | 1.00th=[ 71], 5.00th=[ 99], 10.00th=[ 103], 20.00th=[ 112], 00:25:10.824 | 30.00th=[ 125], 40.00th=[ 144], 50.00th=[ 174], 60.00th=[ 249], 00:25:10.824 | 70.00th=[ 264], 80.00th=[ 271], 90.00th=[ 309], 95.00th=[ 342], 00:25:10.824 | 99.00th=[ 363], 99.50th=[ 430], 99.90th=[ 481], 99.95th=[ 502], 00:25:10.824 | 99.99th=[ 502] 00:25:10.824 bw ( KiB/s): min=49152, max=149504, per=10.84%, avg=81460.40, stdev=31636.91, samples=20 00:25:10.824 iops : min= 192, max= 584, avg=318.15, stdev=123.61, samples=20 00:25:10.824 lat (msec) : 20=0.12%, 50=0.52%, 100=7.18%, 250=53.10%, 500=39.01% 00:25:10.824 lat (msec) : 750=0.06% 00:25:10.824 cpu : usr=0.84%, sys=0.91%, ctx=990, majf=0, minf=1 00:25:10.824 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:10.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.824 issued rwts: total=0,3245,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.824 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.824 00:25:10.824 Run status group 0 (all jobs): 00:25:10.824 WRITE: bw=734MiB/s (770MB/s), 27.5MiB/s-112MiB/s (28.8MB/s-117MB/s), io=7561MiB (7929MB), run=10098-10301msec 00:25:10.824 00:25:10.824 Disk stats (read/write): 00:25:10.824 nvme0n1: ios=49/5825, merge=0/0, ticks=234/1206311, in_queue=1206545, util=98.87% 00:25:10.824 nvme10n1: ios=49/3918, merge=0/0, ticks=276/1102049, in_queue=1102325, util=99.39% 00:25:10.824 nvme1n1: ios=13/8805, merge=0/0, ticks=21/1205632, in_queue=1205653, util=97.46% 00:25:10.824 nvme2n1: ios=39/6677, merge=0/0, ticks=743/1230550, in_queue=1231293, util=100.00% 00:25:10.824 nvme3n1: ios=47/5838, merge=0/0, ticks=6695/1038122, in_queue=1044817, util=100.00% 00:25:10.824 nvme4n1: ios=45/7020, merge=0/0, ticks=168/1248288, in_queue=1248456, util=99.22% 00:25:10.824 nvme5n1: ios=35/3515, merge=0/0, ticks=845/1220152, in_queue=1220997, util=100.00% 
00:25:10.824 nvme6n1: ios=44/2199, merge=0/0, ticks=2265/1084072, in_queue=1086337, util=100.00% 00:25:10.824 nvme7n1: ios=0/4102, merge=0/0, ticks=0/953380, in_queue=953380, util=98.70% 00:25:10.824 nvme8n1: ios=41/5244, merge=0/0, ticks=2389/1242168, in_queue=1244557, util=100.00% 00:25:10.824 nvme9n1: ios=0/6452, merge=0/0, ticks=0/1231804, in_queue=1231804, util=99.09% 00:25:10.824 18:00:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:10.824 18:00:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:10.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:10.825 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
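The disconnect / wait / delete sequence traced above for cnode1 and cnode2 repeats below for cnode3 through cnode11. Taken together it amounts to a loop roughly like the one sketched here; this is a minimal reconstruction from the trace, assuming the helpers it names (waitforserial_disconnect, rpc_cmd) behave as their usage suggests, and is not the literal multiconnection.sh source.

    # Tear down each of the 11 NVMe-oF subsystems created by the multiconnection test:
    # disconnect the initiator, wait until the namespace's serial disappears from lsblk,
    # then remove the target-side subsystem over the SPDK RPC socket.
    NVMF_SUBSYS=11
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # waitforserial_disconnect: poll lsblk until serial SPDK${i} is no longer listed
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
            sleep 1
        done
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done
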
00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:10.825 18:00:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:11.389 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:11.389 18:00:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:11.389 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:11.389 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:11.389 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:11.389 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:11.389 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:11.389 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:11.389 18:00:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:11.389 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.389 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.389 18:00:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.389 18:00:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.389 18:00:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:11.647 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:11.647 18:00:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:11.647 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:11.647 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:11.647 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:11.647 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:11.647 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:11.647 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:11.647 18:00:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:11.647 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.647 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.647 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.647 18:00:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.647 18:00:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:11.904 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:11.904 18:00:46 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:11.904 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:11.904 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:11.904 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:11.904 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:11.904 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:11.904 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:11.904 18:00:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:11.904 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.904 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.904 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.904 18:00:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.904 18:00:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:12.162 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:12.162 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection 
-- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.162 18:00:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:12.418 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:12.418 18:00:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:12.418 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:12.418 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:12.418 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:12.418 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:12.418 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:12.418 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:12.418 18:00:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:12.418 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.418 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.419 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.419 18:00:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.419 18:00:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:12.419 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:12.419 18:00:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:12.419 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:12.419 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:12.419 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:12.419 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:12.419 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:12.675 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:12.675 18:00:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:12.675 18:00:47 nvmf_tcp.nvmf_multiconnection 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:12.676 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:12.676 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 
-- # trap - SIGINT SIGTERM EXIT 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:12.676 rmmod nvme_tcp 00:25:12.676 rmmod nvme_fabrics 00:25:12.676 rmmod nvme_keyring 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1002669 ']' 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1002669 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 1002669 ']' 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 1002669 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1002669 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1002669' 00:25:12.676 killing process with pid 1002669 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 1002669 00:25:12.676 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 1002669 00:25:13.239 18:00:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:13.239 18:00:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:13.239 18:00:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:13.239 18:00:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:13.239 18:00:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:13.239 18:00:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.239 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.239 18:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.770 18:00:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:15.770 00:25:15.770 real 1m0.414s 00:25:15.770 user 3m15.877s 00:25:15.770 sys 0m19.941s 00:25:15.770 18:00:50 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:25:15.770 18:00:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:15.770 ************************************ 00:25:15.770 END TEST nvmf_multiconnection 00:25:15.770 ************************************ 00:25:15.770 18:00:50 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:15.770 18:00:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:15.770 18:00:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:15.770 18:00:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:15.770 ************************************ 00:25:15.770 START TEST nvmf_initiator_timeout 00:25:15.770 ************************************ 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:15.770 * Looking for test storage... 00:25:15.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.770 18:00:50 
nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 
1 ']' 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:15.770 18:00:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.667 
18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:17.667 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:17.667 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:17.667 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:17.667 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:17.667 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:17.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:17.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:25:17.668 00:25:17.668 --- 10.0.0.2 ping statistics --- 00:25:17.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.668 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:17.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:25:17.668 00:25:17.668 --- 10.0.0.1 ping statistics --- 00:25:17.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.668 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1011351 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1011351 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 1011351 ']' 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:17.668 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:17.668 [2024-07-20 18:00:52.244440] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:17.668 [2024-07-20 18:00:52.244526] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.668 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.668 [2024-07-20 18:00:52.320284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:17.668 [2024-07-20 18:00:52.416576] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:17.668 [2024-07-20 18:00:52.416650] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.668 [2024-07-20 18:00:52.416667] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.668 [2024-07-20 18:00:52.416681] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.668 [2024-07-20 18:00:52.416693] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:17.668 [2024-07-20 18:00:52.416765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.668 [2024-07-20 18:00:52.416837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:17.668 [2024-07-20 18:00:52.416882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:17.668 [2024-07-20 18:00:52.420807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:17.925 Malloc0 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:17.925 Delay0 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:17.925 [2024-07-20 18:00:52.612597] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:17.925 [2024-07-20 18:00:52.640884] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.925 18:00:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:18.488 18:00:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:18.488 18:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:25:18.488 18:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:18.488 18:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:18.488 18:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:25:21.009 18:00:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:21.009 18:00:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:21.009 18:00:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:25:21.009 18:00:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:21.009 18:00:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:21.009 18:00:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:25:21.009 18:00:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1011733 00:25:21.009 18:00:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:21.009 18:00:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:21.009 [global] 00:25:21.009 thread=1 00:25:21.009 invalidate=1 00:25:21.009 rw=write 00:25:21.009 time_based=1 00:25:21.009 runtime=60 00:25:21.009 
ioengine=libaio 00:25:21.009 direct=1 00:25:21.009 bs=4096 00:25:21.009 iodepth=1 00:25:21.009 norandommap=0 00:25:21.009 numjobs=1 00:25:21.009 00:25:21.009 verify_dump=1 00:25:21.009 verify_backlog=512 00:25:21.009 verify_state_save=0 00:25:21.009 do_verify=1 00:25:21.009 verify=crc32c-intel 00:25:21.009 [job0] 00:25:21.009 filename=/dev/nvme0n1 00:25:21.009 Could not set queue depth (nvme0n1) 00:25:21.009 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:21.009 fio-3.35 00:25:21.009 Starting 1 thread 00:25:23.527 18:00:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:23.527 18:00:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.527 18:00:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:23.527 true 00:25:23.527 18:00:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.527 18:00:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:23.527 18:00:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.527 18:00:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:23.527 true 00:25:23.527 18:00:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.527 18:00:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:23.527 18:00:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.527 18:00:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:23.527 true 00:25:23.527 18:00:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.527 18:00:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:23.527 18:00:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.527 18:00:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:23.527 true 00:25:23.527 18:00:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.527 18:00:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:26.848 18:01:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:26.848 18:01:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.848 18:01:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:26.848 true 00:25:26.848 18:01:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.848 18:01:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:26.848 18:01:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.848 18:01:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:26.848 true 00:25:26.848 18:01:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.848 
18:01:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:26.848 18:01:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.848 18:01:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:26.848 true 00:25:26.848 18:01:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.848 18:01:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:26.848 18:01:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.848 18:01:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:26.848 true 00:25:26.848 18:01:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.848 18:01:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:26.848 18:01:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1011733 00:26:23.039 00:26:23.039 job0: (groupid=0, jobs=1): err= 0: pid=1011848: Sat Jul 20 18:01:55 2024 00:26:23.039 read: IOPS=56, BW=226KiB/s (231kB/s)(13.2MiB/60001msec) 00:26:23.039 slat (usec): min=6, max=13677, avg=24.26, stdev=234.76 00:26:23.039 clat (usec): min=473, max=41234k, avg=17221.93, stdev=708124.74 00:26:23.039 lat (usec): min=482, max=41234k, avg=17246.19, stdev=708124.58 00:26:23.039 clat percentiles (usec): 00:26:23.039 | 1.00th=[ 486], 5.00th=[ 498], 10.00th=[ 506], 00:26:23.039 | 20.00th=[ 519], 30.00th=[ 537], 40.00th=[ 562], 00:26:23.039 | 50.00th=[ 594], 60.00th=[ 652], 70.00th=[ 709], 00:26:23.039 | 80.00th=[ 766], 90.00th=[ 41157], 95.00th=[ 41157], 00:26:23.039 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 41681], 00:26:23.039 | 99.95th=[ 42206], 99.99th=[17112761] 00:26:23.039 write: IOPS=59, BW=239KiB/s (245kB/s)(14.0MiB/60001msec); 0 zone resets 00:26:23.039 slat (nsec): min=5935, max=97225, avg=22264.71, stdev=13191.73 00:26:23.039 clat (usec): min=301, max=829, avg=390.05, stdev=42.43 00:26:23.039 lat (usec): min=310, max=848, avg=412.32, stdev=51.42 00:26:23.039 clat percentiles (usec): 00:26:23.039 | 1.00th=[ 318], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 351], 00:26:23.040 | 30.00th=[ 363], 40.00th=[ 375], 50.00th=[ 388], 60.00th=[ 396], 00:26:23.040 | 70.00th=[ 412], 80.00th=[ 429], 90.00th=[ 449], 95.00th=[ 465], 00:26:23.040 | 99.00th=[ 506], 99.50th=[ 515], 99.90th=[ 529], 99.95th=[ 553], 00:26:23.040 | 99.99th=[ 832] 00:26:23.040 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=7 00:26:23.040 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=7 00:26:23.040 lat (usec) : 500=53.46%, 750=35.86%, 1000=5.28% 00:26:23.040 lat (msec) : 2=0.01%, 4=0.01%, 50=5.36%, >=2000=0.01% 00:26:23.040 cpu : usr=0.17%, sys=0.29%, ctx=6976, majf=0, minf=2 00:26:23.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.040 issued rwts: total=3391,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:23.040 00:26:23.040 Run status group 0 (all jobs): 00:26:23.040 READ: bw=226KiB/s (231kB/s), 226KiB/s-226KiB/s (231kB/s-231kB/s), io=13.2MiB 
(13.9MB), run=60001-60001msec 00:26:23.040 WRITE: bw=239KiB/s (245kB/s), 239KiB/s-239KiB/s (245kB/s-245kB/s), io=14.0MiB (14.7MB), run=60001-60001msec 00:26:23.040 00:26:23.040 Disk stats (read/write): 00:26:23.040 nvme0n1: ios=3487/3584, merge=0/0, ticks=17154/1314, in_queue=18468, util=99.88% 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:23.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:23.040 nvmf hotplug test: fio successful as expected 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:23.040 rmmod nvme_tcp 00:26:23.040 rmmod nvme_fabrics 00:26:23.040 rmmod nvme_keyring 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1011351 ']' 00:26:23.040 18:01:55 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1011351 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 1011351 ']' 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 1011351 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1011351 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1011351' 00:26:23.040 killing process with pid 1011351 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 1011351 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 1011351 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.040 18:01:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.040 18:01:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.298 18:01:58 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:23.298 00:26:23.298 real 1m7.971s 00:26:23.298 user 4m10.543s 00:26:23.298 sys 0m6.387s 00:26:23.298 18:01:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:23.298 18:01:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.298 ************************************ 00:26:23.298 END TEST nvmf_initiator_timeout 00:26:23.298 ************************************ 00:26:23.298 18:01:58 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:23.298 18:01:58 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:23.298 18:01:58 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:23.298 18:01:58 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:23.298 18:01:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:25.823 
18:02:00 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:25.823 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:25.823 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@366 
-- # (( 0 > 0 )) 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:25.823 18:02:00 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:25.824 18:02:00 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.824 18:02:00 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:25.824 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:25.824 18:02:00 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.824 18:02:00 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:25.824 18:02:00 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.824 18:02:00 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:25.824 18:02:00 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.824 18:02:00 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:25.824 18:02:00 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:25.824 18:02:00 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.824 18:02:00 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:25.824 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:25.824 18:02:00 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.824 18:02:00 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:25.824 18:02:00 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:25.824 18:02:00 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:25.824 18:02:00 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:25.824 18:02:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:25.824 18:02:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:25.824 18:02:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:25.824 ************************************ 00:26:25.824 START TEST nvmf_perf_adq 00:26:25.824 ************************************ 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:25.824 * Looking for test storage... 
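At this point nvmf.sh has enumerated the two E810 ports (device ID 0x159b, ice driver), collected their net devices cvl_0_0 and cvl_0_1 into TCP_INTERFACE_LIST, and launched the ADQ test; ADQ (Application Device Queues) is an E810/ice feature, so the perf_adq run only makes sense on this hardware. The gather_supported_nvmf_pci_devs helper essentially walks sysfs for known vendor/device IDs; a rough standalone sketch of that lookup, not the helper itself:

# list net devices sitting on Intel E810 ports (vendor 0x8086, device 0x159b),
# the same ports the helper reported above as cvl_0_0 / cvl_0_1
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "ADQ-capable port: ${net##*/} ($(basename "$pci"))"
    done
done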
00:26:25.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:25.824 18:02:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:27.723 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:27.723 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:27.723 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:27.723 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:27.723 18:02:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:28.287 18:02:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:29.658 18:02:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:34.952 18:02:09 
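The rmmod/modprobe/sleep sequence just above is adq_reload_driver bouncing the ice driver, presumably to return the NIC's queue and filter state to defaults before the ADQ-related socket options are applied, and then waiting for the ports to come back. A trimmed-down equivalent of that step, ignoring udev renaming details:

# driver bounce performed by adq_reload_driver, reduced to its essentials
sudo rmmod ice        # unload: drops any channel/filter configuration
sudo modprobe ice     # reload: ports re-enumerate with default queue setup
sleep 5               # allow link-up and interface renaming to settle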
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:34.952 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:34.953 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:34.953 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:34.953 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:34.953 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:34.953 18:02:09 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:34.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:34.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:26:34.953 00:26:34.953 --- 10.0.0.2 ping statistics --- 00:26:34.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.953 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:34.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:34.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:26:34.953 00:26:34.953 --- 10.0.0.1 ping statistics --- 00:26:34.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.953 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1023348 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1023348 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 1023348 ']' 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:34.953 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:34.953 [2024-07-20 18:02:09.531820] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
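With both E810 ports in the same host, nvmftestinit splits them across a network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2 (target side) while cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), so NVMe/TCP traffic genuinely crosses between the two ports. The ping checks above confirm the path in both directions, and nvmf_tgt is then started inside the namespace on four cores with --wait-for-rpc so socket and transport options can be set before any subsystem exists. A condensed sketch of that setup, with interface names, addresses, and flags copied from the log and the binary path shortened:

# target-side port into its own namespace, initiator-side port stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP in
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
modprobe nvme-tcp
# start the target inside the namespace, paused until RPC configuration is done
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &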
00:26:34.953 [2024-07-20 18:02:09.531895] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:34.953 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.953 [2024-07-20 18:02:09.606267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:34.953 [2024-07-20 18:02:09.698204] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:34.953 [2024-07-20 18:02:09.698259] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:34.953 [2024-07-20 18:02:09.698275] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:34.953 [2024-07-20 18:02:09.698290] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:34.953 [2024-07-20 18:02:09.698302] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:34.953 [2024-07-20 18:02:09.698398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.953 [2024-07-20 18:02:09.698489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:34.953 [2024-07-20 18:02:09.698466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:34.953 [2024-07-20 18:02:09.698492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:35.211 [2024-07-20 18:02:09.891325] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:35.211 Malloc1 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:35.211 [2024-07-20 18:02:09.942161] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1023379 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:35.211 18:02:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:35.211 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.742 18:02:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:37.742 18:02:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.742 18:02:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:37.742 18:02:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.742 18:02:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:37.742 "tick_rate": 2700000000, 
00:26:37.742 "poll_groups": [ 00:26:37.742 { 00:26:37.742 "name": "nvmf_tgt_poll_group_000", 00:26:37.742 "admin_qpairs": 1, 00:26:37.742 "io_qpairs": 1, 00:26:37.742 "current_admin_qpairs": 1, 00:26:37.742 "current_io_qpairs": 1, 00:26:37.742 "pending_bdev_io": 0, 00:26:37.742 "completed_nvme_io": 20038, 00:26:37.742 "transports": [ 00:26:37.742 { 00:26:37.742 "trtype": "TCP" 00:26:37.742 } 00:26:37.742 ] 00:26:37.742 }, 00:26:37.742 { 00:26:37.742 "name": "nvmf_tgt_poll_group_001", 00:26:37.742 "admin_qpairs": 0, 00:26:37.742 "io_qpairs": 1, 00:26:37.742 "current_admin_qpairs": 0, 00:26:37.742 "current_io_qpairs": 1, 00:26:37.742 "pending_bdev_io": 0, 00:26:37.742 "completed_nvme_io": 18629, 00:26:37.742 "transports": [ 00:26:37.742 { 00:26:37.742 "trtype": "TCP" 00:26:37.742 } 00:26:37.742 ] 00:26:37.742 }, 00:26:37.742 { 00:26:37.742 "name": "nvmf_tgt_poll_group_002", 00:26:37.742 "admin_qpairs": 0, 00:26:37.742 "io_qpairs": 1, 00:26:37.742 "current_admin_qpairs": 0, 00:26:37.742 "current_io_qpairs": 1, 00:26:37.742 "pending_bdev_io": 0, 00:26:37.742 "completed_nvme_io": 17253, 00:26:37.742 "transports": [ 00:26:37.742 { 00:26:37.742 "trtype": "TCP" 00:26:37.742 } 00:26:37.742 ] 00:26:37.742 }, 00:26:37.742 { 00:26:37.742 "name": "nvmf_tgt_poll_group_003", 00:26:37.742 "admin_qpairs": 0, 00:26:37.742 "io_qpairs": 1, 00:26:37.742 "current_admin_qpairs": 0, 00:26:37.742 "current_io_qpairs": 1, 00:26:37.742 "pending_bdev_io": 0, 00:26:37.742 "completed_nvme_io": 19113, 00:26:37.742 "transports": [ 00:26:37.742 { 00:26:37.742 "trtype": "TCP" 00:26:37.742 } 00:26:37.742 ] 00:26:37.742 } 00:26:37.742 ] 00:26:37.742 }' 00:26:37.742 18:02:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:37.742 18:02:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:37.742 18:02:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:37.742 18:02:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:37.742 18:02:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1023379 00:26:45.849 Initializing NVMe Controllers 00:26:45.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:45.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:45.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:45.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:45.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:45.849 Initialization complete. Launching workers. 
00:26:45.849 ======================================================== 00:26:45.849 Latency(us) 00:26:45.849 Device Information : IOPS MiB/s Average min max 00:26:45.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10188.79 39.80 6282.52 1386.46 12353.53 00:26:45.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9865.60 38.54 6486.77 3752.77 9617.03 00:26:45.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9127.40 35.65 7013.14 1704.83 11706.30 00:26:45.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10574.29 41.31 6052.80 2387.90 9760.38 00:26:45.849 ======================================================== 00:26:45.849 Total : 39756.08 155.30 6439.85 1386.46 12353.53 00:26:45.849 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:45.849 rmmod nvme_tcp 00:26:45.849 rmmod nvme_fabrics 00:26:45.849 rmmod nvme_keyring 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1023348 ']' 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1023348 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1023348 ']' 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1023348 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1023348 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1023348' 00:26:45.849 killing process with pid 1023348 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1023348 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1023348 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.849 18:02:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.745 18:02:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:47.745 18:02:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:26:47.745 18:02:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:48.310 18:02:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:50.221 18:02:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.486 18:02:29 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:55.486 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:55.486 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:55.486 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:55.486 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:55.486 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:55.487 
18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:55.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:55.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:26:55.487 00:26:55.487 --- 10.0.0.2 ping statistics --- 00:26:55.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.487 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:55.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:55.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:26:55.487 00:26:55.487 --- 10.0.0.1 ping statistics --- 00:26:55.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.487 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:55.487 net.core.busy_poll = 1 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:55.487 net.core.busy_read = 1 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1025873 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1025873 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 1025873 ']' 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:55.487 18:02:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.487 [2024-07-20 18:02:29.874453] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:26:55.487 [2024-07-20 18:02:29.874545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.487 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.487 [2024-07-20 18:02:29.939806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:55.487 [2024-07-20 18:02:30.036933] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.487 [2024-07-20 18:02:30.036992] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.487 [2024-07-20 18:02:30.037006] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.487 [2024-07-20 18:02:30.037018] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.487 [2024-07-20 18:02:30.037029] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:55.487 [2024-07-20 18:02:30.037090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.487 [2024-07-20 18:02:30.037152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.487 [2024-07-20 18:02:30.037217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:55.487 [2024-07-20 18:02:30.037219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.487 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.746 [2024-07-20 18:02:30.283753] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.746 Malloc1 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.746 18:02:30 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.746 [2024-07-20 18:02:30.336940] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1026005 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:26:55.746 18:02:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:55.746 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.645 18:02:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:26:57.645 18:02:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.645 18:02:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.645 18:02:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.645 18:02:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:26:57.645 "tick_rate": 2700000000, 00:26:57.645 "poll_groups": [ 00:26:57.645 { 00:26:57.645 "name": "nvmf_tgt_poll_group_000", 00:26:57.645 "admin_qpairs": 1, 00:26:57.645 "io_qpairs": 1, 00:26:57.645 "current_admin_qpairs": 1, 00:26:57.645 "current_io_qpairs": 1, 00:26:57.645 "pending_bdev_io": 0, 00:26:57.645 "completed_nvme_io": 24171, 00:26:57.645 "transports": [ 00:26:57.645 { 00:26:57.645 "trtype": "TCP" 00:26:57.645 } 00:26:57.645 ] 00:26:57.645 }, 00:26:57.645 { 00:26:57.645 "name": "nvmf_tgt_poll_group_001", 00:26:57.645 "admin_qpairs": 0, 00:26:57.645 "io_qpairs": 3, 00:26:57.645 "current_admin_qpairs": 0, 00:26:57.645 "current_io_qpairs": 3, 00:26:57.645 "pending_bdev_io": 0, 00:26:57.645 "completed_nvme_io": 21958, 00:26:57.645 "transports": [ 00:26:57.645 { 00:26:57.645 "trtype": "TCP" 00:26:57.645 } 00:26:57.645 ] 00:26:57.645 }, 00:26:57.645 { 00:26:57.645 "name": "nvmf_tgt_poll_group_002", 00:26:57.645 "admin_qpairs": 0, 00:26:57.645 "io_qpairs": 0, 00:26:57.645 "current_admin_qpairs": 0, 00:26:57.645 "current_io_qpairs": 0, 00:26:57.645 "pending_bdev_io": 0, 00:26:57.645 "completed_nvme_io": 0, 
00:26:57.645 "transports": [ 00:26:57.645 { 00:26:57.645 "trtype": "TCP" 00:26:57.645 } 00:26:57.645 ] 00:26:57.645 }, 00:26:57.645 { 00:26:57.645 "name": "nvmf_tgt_poll_group_003", 00:26:57.645 "admin_qpairs": 0, 00:26:57.645 "io_qpairs": 0, 00:26:57.645 "current_admin_qpairs": 0, 00:26:57.645 "current_io_qpairs": 0, 00:26:57.645 "pending_bdev_io": 0, 00:26:57.645 "completed_nvme_io": 0, 00:26:57.645 "transports": [ 00:26:57.645 { 00:26:57.645 "trtype": "TCP" 00:26:57.645 } 00:26:57.645 ] 00:26:57.645 } 00:26:57.645 ] 00:26:57.645 }' 00:26:57.645 18:02:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:57.645 18:02:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:26:57.645 18:02:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:26:57.645 18:02:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:26:57.645 18:02:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1026005 00:27:05.810 Initializing NVMe Controllers 00:27:05.810 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:05.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:05.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:05.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:05.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:05.810 Initialization complete. Launching workers. 00:27:05.810 ======================================================== 00:27:05.810 Latency(us) 00:27:05.810 Device Information : IOPS MiB/s Average min max 00:27:05.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3967.60 15.50 16179.49 2446.08 61938.97 00:27:05.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 3518.20 13.74 18195.31 1804.36 64043.85 00:27:05.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4224.40 16.50 15204.78 1983.18 61771.04 00:27:05.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12847.60 50.19 4981.46 1537.76 9097.06 00:27:05.810 ======================================================== 00:27:05.810 Total : 24557.80 95.93 10442.28 1537.76 64043.85 00:27:05.810 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:05.810 rmmod nvme_tcp 00:27:05.810 rmmod nvme_fabrics 00:27:05.810 rmmod nvme_keyring 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1025873 ']' 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1025873 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1025873 ']' 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1025873 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1025873 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1025873' 00:27:05.810 killing process with pid 1025873 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1025873 00:27:05.810 18:02:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1025873 00:27:06.069 18:02:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:06.069 18:02:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:06.069 18:02:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:06.069 18:02:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:06.069 18:02:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:06.069 18:02:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.069 18:02:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:06.069 18:02:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.398 18:02:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:09.398 18:02:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:09.398 00:27:09.398 real 0m43.740s 00:27:09.398 user 2m30.538s 00:27:09.398 sys 0m12.127s 00:27:09.398 18:02:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:09.398 18:02:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:09.398 ************************************ 00:27:09.398 END TEST nvmf_perf_adq 00:27:09.398 ************************************ 00:27:09.398 18:02:43 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:09.398 18:02:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:09.398 18:02:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:09.398 18:02:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:09.398 ************************************ 00:27:09.398 START TEST nvmf_shutdown 00:27:09.398 ************************************ 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:09.398 * Looking for test storage... 
00:27:09.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:09.398 18:02:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:09.398 ************************************ 00:27:09.398 START TEST nvmf_shutdown_tc1 00:27:09.398 ************************************ 00:27:09.398 18:02:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:09.398 18:02:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:09.398 18:02:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:09.398 18:02:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:09.398 18:02:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:09.398 18:02:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:09.398 18:02:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:09.398 18:02:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:09.398 18:02:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.398 18:02:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:09.398 18:02:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.399 18:02:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:09.399 18:02:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:09.399 18:02:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:09.399 18:02:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:11.321 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.321 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:11.321 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.322 18:02:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:11.322 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:11.322 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:11.322 18:02:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:11.322 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:11.322 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:11.322 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:11.322 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:11.322 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:11.322 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:11.322 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:11.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:11.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:27:11.322 00:27:11.322 --- 10.0.0.2 ping statistics --- 00:27:11.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.322 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:27:11.322 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:11.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:11.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:27:11.580 00:27:11.581 --- 10.0.0.1 ping statistics --- 00:27:11.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.581 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1029302 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1029302 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1029302 ']' 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:11.581 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:11.581 [2024-07-20 18:02:46.193772] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:27:11.581 [2024-07-20 18:02:46.193866] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:11.581 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.581 [2024-07-20 18:02:46.258266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:11.581 [2024-07-20 18:02:46.343414] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:11.581 [2024-07-20 18:02:46.343468] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:11.581 [2024-07-20 18:02:46.343492] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:11.581 [2024-07-20 18:02:46.343504] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:11.581 [2024-07-20 18:02:46.343515] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:11.581 [2024-07-20 18:02:46.343609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:11.581 [2024-07-20 18:02:46.343674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:11.581 [2024-07-20 18:02:46.343741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:11.581 [2024-07-20 18:02:46.343743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:11.840 [2024-07-20 18:02:46.497721] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.840 18:02:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:11.840 Malloc1 00:27:11.840 [2024-07-20 18:02:46.587480] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.840 Malloc2 00:27:12.099 Malloc3 00:27:12.099 Malloc4 00:27:12.099 Malloc5 00:27:12.099 Malloc6 00:27:12.099 Malloc7 00:27:12.359 Malloc8 00:27:12.359 Malloc9 00:27:12.359 Malloc10 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1029478 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1029478 /var/tmp/bdevperf.sock 00:27:12.359 18:02:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1029478 ']' 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:12.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.359 { 00:27:12.359 "params": { 00:27:12.359 "name": "Nvme$subsystem", 00:27:12.359 "trtype": "$TEST_TRANSPORT", 00:27:12.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.359 "adrfam": "ipv4", 00:27:12.359 "trsvcid": "$NVMF_PORT", 00:27:12.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.359 "hdgst": ${hdgst:-false}, 00:27:12.359 "ddgst": ${ddgst:-false} 00:27:12.359 }, 00:27:12.359 "method": "bdev_nvme_attach_controller" 00:27:12.359 } 00:27:12.359 EOF 00:27:12.359 )") 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.359 { 00:27:12.359 "params": { 00:27:12.359 "name": "Nvme$subsystem", 00:27:12.359 "trtype": "$TEST_TRANSPORT", 00:27:12.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.359 "adrfam": "ipv4", 00:27:12.359 "trsvcid": "$NVMF_PORT", 00:27:12.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.359 "hdgst": ${hdgst:-false}, 00:27:12.359 "ddgst": ${ddgst:-false} 00:27:12.359 }, 00:27:12.359 "method": "bdev_nvme_attach_controller" 00:27:12.359 } 00:27:12.359 EOF 00:27:12.359 )") 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.359 { 00:27:12.359 "params": { 00:27:12.359 "name": "Nvme$subsystem", 00:27:12.359 "trtype": 
"$TEST_TRANSPORT", 00:27:12.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.359 "adrfam": "ipv4", 00:27:12.359 "trsvcid": "$NVMF_PORT", 00:27:12.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.359 "hdgst": ${hdgst:-false}, 00:27:12.359 "ddgst": ${ddgst:-false} 00:27:12.359 }, 00:27:12.359 "method": "bdev_nvme_attach_controller" 00:27:12.359 } 00:27:12.359 EOF 00:27:12.359 )") 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.359 { 00:27:12.359 "params": { 00:27:12.359 "name": "Nvme$subsystem", 00:27:12.359 "trtype": "$TEST_TRANSPORT", 00:27:12.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.359 "adrfam": "ipv4", 00:27:12.359 "trsvcid": "$NVMF_PORT", 00:27:12.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.359 "hdgst": ${hdgst:-false}, 00:27:12.359 "ddgst": ${ddgst:-false} 00:27:12.359 }, 00:27:12.359 "method": "bdev_nvme_attach_controller" 00:27:12.359 } 00:27:12.359 EOF 00:27:12.359 )") 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.359 { 00:27:12.359 "params": { 00:27:12.359 "name": "Nvme$subsystem", 00:27:12.359 "trtype": "$TEST_TRANSPORT", 00:27:12.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.359 "adrfam": "ipv4", 00:27:12.359 "trsvcid": "$NVMF_PORT", 00:27:12.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.359 "hdgst": ${hdgst:-false}, 00:27:12.359 "ddgst": ${ddgst:-false} 00:27:12.359 }, 00:27:12.359 "method": "bdev_nvme_attach_controller" 00:27:12.359 } 00:27:12.359 EOF 00:27:12.359 )") 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.359 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.359 { 00:27:12.359 "params": { 00:27:12.359 "name": "Nvme$subsystem", 00:27:12.359 "trtype": "$TEST_TRANSPORT", 00:27:12.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.359 "adrfam": "ipv4", 00:27:12.359 "trsvcid": "$NVMF_PORT", 00:27:12.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.360 "hdgst": ${hdgst:-false}, 00:27:12.360 "ddgst": ${ddgst:-false} 00:27:12.360 }, 00:27:12.360 "method": "bdev_nvme_attach_controller" 00:27:12.360 } 00:27:12.360 EOF 00:27:12.360 )") 00:27:12.360 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:12.360 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.360 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.360 { 00:27:12.360 "params": { 00:27:12.360 "name": "Nvme$subsystem", 00:27:12.360 "trtype": "$TEST_TRANSPORT", 
00:27:12.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.360 "adrfam": "ipv4", 00:27:12.360 "trsvcid": "$NVMF_PORT", 00:27:12.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.360 "hdgst": ${hdgst:-false}, 00:27:12.360 "ddgst": ${ddgst:-false} 00:27:12.360 }, 00:27:12.360 "method": "bdev_nvme_attach_controller" 00:27:12.360 } 00:27:12.360 EOF 00:27:12.360 )") 00:27:12.360 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:12.360 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.360 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.360 { 00:27:12.360 "params": { 00:27:12.360 "name": "Nvme$subsystem", 00:27:12.360 "trtype": "$TEST_TRANSPORT", 00:27:12.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.360 "adrfam": "ipv4", 00:27:12.360 "trsvcid": "$NVMF_PORT", 00:27:12.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.360 "hdgst": ${hdgst:-false}, 00:27:12.360 "ddgst": ${ddgst:-false} 00:27:12.360 }, 00:27:12.360 "method": "bdev_nvme_attach_controller" 00:27:12.360 } 00:27:12.360 EOF 00:27:12.360 )") 00:27:12.360 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:12.360 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.360 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.360 { 00:27:12.360 "params": { 00:27:12.360 "name": "Nvme$subsystem", 00:27:12.360 "trtype": "$TEST_TRANSPORT", 00:27:12.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.360 "adrfam": "ipv4", 00:27:12.360 "trsvcid": "$NVMF_PORT", 00:27:12.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.360 "hdgst": ${hdgst:-false}, 00:27:12.360 "ddgst": ${ddgst:-false} 00:27:12.360 }, 00:27:12.360 "method": "bdev_nvme_attach_controller" 00:27:12.360 } 00:27:12.360 EOF 00:27:12.360 )") 00:27:12.360 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:12.360 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.360 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.360 { 00:27:12.360 "params": { 00:27:12.360 "name": "Nvme$subsystem", 00:27:12.360 "trtype": "$TEST_TRANSPORT", 00:27:12.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.360 "adrfam": "ipv4", 00:27:12.360 "trsvcid": "$NVMF_PORT", 00:27:12.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.360 "hdgst": ${hdgst:-false}, 00:27:12.360 "ddgst": ${ddgst:-false} 00:27:12.360 }, 00:27:12.360 "method": "bdev_nvme_attach_controller" 00:27:12.360 } 00:27:12.360 EOF 00:27:12.360 )") 00:27:12.360 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:12.360 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:12.360 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:12.360 18:02:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:12.360 "params": { 00:27:12.360 "name": "Nvme1", 00:27:12.360 "trtype": "tcp", 00:27:12.360 "traddr": "10.0.0.2", 00:27:12.360 "adrfam": "ipv4", 00:27:12.360 "trsvcid": "4420", 00:27:12.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:12.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:12.360 "hdgst": false, 00:27:12.360 "ddgst": false 00:27:12.360 }, 00:27:12.360 "method": "bdev_nvme_attach_controller" 00:27:12.360 },{ 00:27:12.360 "params": { 00:27:12.360 "name": "Nvme2", 00:27:12.360 "trtype": "tcp", 00:27:12.360 "traddr": "10.0.0.2", 00:27:12.360 "adrfam": "ipv4", 00:27:12.360 "trsvcid": "4420", 00:27:12.360 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:12.360 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:12.360 "hdgst": false, 00:27:12.360 "ddgst": false 00:27:12.360 }, 00:27:12.360 "method": "bdev_nvme_attach_controller" 00:27:12.360 },{ 00:27:12.360 "params": { 00:27:12.360 "name": "Nvme3", 00:27:12.360 "trtype": "tcp", 00:27:12.360 "traddr": "10.0.0.2", 00:27:12.360 "adrfam": "ipv4", 00:27:12.360 "trsvcid": "4420", 00:27:12.360 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:12.360 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:12.360 "hdgst": false, 00:27:12.360 "ddgst": false 00:27:12.360 }, 00:27:12.360 "method": "bdev_nvme_attach_controller" 00:27:12.360 },{ 00:27:12.360 "params": { 00:27:12.360 "name": "Nvme4", 00:27:12.360 "trtype": "tcp", 00:27:12.360 "traddr": "10.0.0.2", 00:27:12.360 "adrfam": "ipv4", 00:27:12.360 "trsvcid": "4420", 00:27:12.360 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:12.360 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:12.360 "hdgst": false, 00:27:12.360 "ddgst": false 00:27:12.360 }, 00:27:12.360 "method": "bdev_nvme_attach_controller" 00:27:12.360 },{ 00:27:12.360 "params": { 00:27:12.360 "name": "Nvme5", 00:27:12.360 "trtype": "tcp", 00:27:12.360 "traddr": "10.0.0.2", 00:27:12.360 "adrfam": "ipv4", 00:27:12.360 "trsvcid": "4420", 00:27:12.360 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:12.360 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:12.360 "hdgst": false, 00:27:12.360 "ddgst": false 00:27:12.360 }, 00:27:12.360 "method": "bdev_nvme_attach_controller" 00:27:12.360 },{ 00:27:12.360 "params": { 00:27:12.360 "name": "Nvme6", 00:27:12.360 "trtype": "tcp", 00:27:12.360 "traddr": "10.0.0.2", 00:27:12.360 "adrfam": "ipv4", 00:27:12.360 "trsvcid": "4420", 00:27:12.360 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:12.360 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:12.360 "hdgst": false, 00:27:12.360 "ddgst": false 00:27:12.360 }, 00:27:12.361 "method": "bdev_nvme_attach_controller" 00:27:12.361 },{ 00:27:12.361 "params": { 00:27:12.361 "name": "Nvme7", 00:27:12.361 "trtype": "tcp", 00:27:12.361 "traddr": "10.0.0.2", 00:27:12.361 "adrfam": "ipv4", 00:27:12.361 "trsvcid": "4420", 00:27:12.361 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:12.361 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:12.361 "hdgst": false, 00:27:12.361 "ddgst": false 00:27:12.361 }, 00:27:12.361 "method": "bdev_nvme_attach_controller" 00:27:12.361 },{ 00:27:12.361 "params": { 00:27:12.361 "name": "Nvme8", 00:27:12.361 "trtype": "tcp", 00:27:12.361 "traddr": "10.0.0.2", 00:27:12.361 "adrfam": "ipv4", 00:27:12.361 "trsvcid": "4420", 00:27:12.361 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:12.361 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:12.361 "hdgst": false, 
00:27:12.361 "ddgst": false 00:27:12.361 }, 00:27:12.361 "method": "bdev_nvme_attach_controller" 00:27:12.361 },{ 00:27:12.361 "params": { 00:27:12.361 "name": "Nvme9", 00:27:12.361 "trtype": "tcp", 00:27:12.361 "traddr": "10.0.0.2", 00:27:12.361 "adrfam": "ipv4", 00:27:12.361 "trsvcid": "4420", 00:27:12.361 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:12.361 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:12.361 "hdgst": false, 00:27:12.361 "ddgst": false 00:27:12.361 }, 00:27:12.361 "method": "bdev_nvme_attach_controller" 00:27:12.361 },{ 00:27:12.361 "params": { 00:27:12.361 "name": "Nvme10", 00:27:12.361 "trtype": "tcp", 00:27:12.361 "traddr": "10.0.0.2", 00:27:12.361 "adrfam": "ipv4", 00:27:12.361 "trsvcid": "4420", 00:27:12.361 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:12.361 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:12.361 "hdgst": false, 00:27:12.361 "ddgst": false 00:27:12.361 }, 00:27:12.361 "method": "bdev_nvme_attach_controller" 00:27:12.361 }' 00:27:12.361 [2024-07-20 18:02:47.105961] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:12.361 [2024-07-20 18:02:47.106041] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:12.361 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.619 [2024-07-20 18:02:47.169523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.619 [2024-07-20 18:02:47.255874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.514 18:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:14.514 18:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:14.514 18:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:14.514 18:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.514 18:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:14.514 18:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.514 18:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1029478 00:27:14.514 18:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:14.514 18:02:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:15.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1029478 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:15.445 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1029302 00:27:15.445 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:15.445 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:15.445 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:15.445 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@532 -- # local subsystem config 00:27:15.445 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.445 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.445 { 00:27:15.445 "params": { 00:27:15.445 "name": "Nvme$subsystem", 00:27:15.445 "trtype": "$TEST_TRANSPORT", 00:27:15.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.445 "adrfam": "ipv4", 00:27:15.445 "trsvcid": "$NVMF_PORT", 00:27:15.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.445 "hdgst": ${hdgst:-false}, 00:27:15.445 "ddgst": ${ddgst:-false} 00:27:15.445 }, 00:27:15.445 "method": "bdev_nvme_attach_controller" 00:27:15.445 } 00:27:15.445 EOF 00:27:15.445 )") 00:27:15.445 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:15.445 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.445 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.445 { 00:27:15.445 "params": { 00:27:15.445 "name": "Nvme$subsystem", 00:27:15.445 "trtype": "$TEST_TRANSPORT", 00:27:15.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.445 "adrfam": "ipv4", 00:27:15.445 "trsvcid": "$NVMF_PORT", 00:27:15.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.445 "hdgst": ${hdgst:-false}, 00:27:15.445 "ddgst": ${ddgst:-false} 00:27:15.445 }, 00:27:15.445 "method": "bdev_nvme_attach_controller" 00:27:15.445 } 00:27:15.446 EOF 00:27:15.446 )") 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.446 { 00:27:15.446 "params": { 00:27:15.446 "name": "Nvme$subsystem", 00:27:15.446 "trtype": "$TEST_TRANSPORT", 00:27:15.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.446 "adrfam": "ipv4", 00:27:15.446 "trsvcid": "$NVMF_PORT", 00:27:15.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.446 "hdgst": ${hdgst:-false}, 00:27:15.446 "ddgst": ${ddgst:-false} 00:27:15.446 }, 00:27:15.446 "method": "bdev_nvme_attach_controller" 00:27:15.446 } 00:27:15.446 EOF 00:27:15.446 )") 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.446 { 00:27:15.446 "params": { 00:27:15.446 "name": "Nvme$subsystem", 00:27:15.446 "trtype": "$TEST_TRANSPORT", 00:27:15.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.446 "adrfam": "ipv4", 00:27:15.446 "trsvcid": "$NVMF_PORT", 00:27:15.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.446 "hdgst": ${hdgst:-false}, 00:27:15.446 "ddgst": ${ddgst:-false} 00:27:15.446 }, 00:27:15.446 "method": "bdev_nvme_attach_controller" 00:27:15.446 } 00:27:15.446 EOF 00:27:15.446 )") 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.446 { 00:27:15.446 "params": { 00:27:15.446 "name": "Nvme$subsystem", 00:27:15.446 "trtype": "$TEST_TRANSPORT", 00:27:15.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.446 "adrfam": "ipv4", 00:27:15.446 "trsvcid": "$NVMF_PORT", 00:27:15.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.446 "hdgst": ${hdgst:-false}, 00:27:15.446 "ddgst": ${ddgst:-false} 00:27:15.446 }, 00:27:15.446 "method": "bdev_nvme_attach_controller" 00:27:15.446 } 00:27:15.446 EOF 00:27:15.446 )") 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.446 { 00:27:15.446 "params": { 00:27:15.446 "name": "Nvme$subsystem", 00:27:15.446 "trtype": "$TEST_TRANSPORT", 00:27:15.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.446 "adrfam": "ipv4", 00:27:15.446 "trsvcid": "$NVMF_PORT", 00:27:15.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.446 "hdgst": ${hdgst:-false}, 00:27:15.446 "ddgst": ${ddgst:-false} 00:27:15.446 }, 00:27:15.446 "method": "bdev_nvme_attach_controller" 00:27:15.446 } 00:27:15.446 EOF 00:27:15.446 )") 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.446 { 00:27:15.446 "params": { 00:27:15.446 "name": "Nvme$subsystem", 00:27:15.446 "trtype": "$TEST_TRANSPORT", 00:27:15.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.446 "adrfam": "ipv4", 00:27:15.446 "trsvcid": "$NVMF_PORT", 00:27:15.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.446 "hdgst": ${hdgst:-false}, 00:27:15.446 "ddgst": ${ddgst:-false} 00:27:15.446 }, 00:27:15.446 "method": "bdev_nvme_attach_controller" 00:27:15.446 } 00:27:15.446 EOF 00:27:15.446 )") 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.446 { 00:27:15.446 "params": { 00:27:15.446 "name": "Nvme$subsystem", 00:27:15.446 "trtype": "$TEST_TRANSPORT", 00:27:15.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.446 "adrfam": "ipv4", 00:27:15.446 "trsvcid": "$NVMF_PORT", 00:27:15.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.446 "hdgst": ${hdgst:-false}, 00:27:15.446 "ddgst": ${ddgst:-false} 00:27:15.446 }, 00:27:15.446 "method": "bdev_nvme_attach_controller" 00:27:15.446 } 00:27:15.446 EOF 00:27:15.446 )") 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.446 { 00:27:15.446 "params": { 00:27:15.446 "name": "Nvme$subsystem", 00:27:15.446 "trtype": "$TEST_TRANSPORT", 00:27:15.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.446 "adrfam": "ipv4", 00:27:15.446 "trsvcid": "$NVMF_PORT", 00:27:15.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.446 "hdgst": ${hdgst:-false}, 00:27:15.446 "ddgst": ${ddgst:-false} 00:27:15.446 }, 00:27:15.446 "method": "bdev_nvme_attach_controller" 00:27:15.446 } 00:27:15.446 EOF 00:27:15.446 )") 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.446 { 00:27:15.446 "params": { 00:27:15.446 "name": "Nvme$subsystem", 00:27:15.446 "trtype": "$TEST_TRANSPORT", 00:27:15.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.446 "adrfam": "ipv4", 00:27:15.446 "trsvcid": "$NVMF_PORT", 00:27:15.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.446 "hdgst": ${hdgst:-false}, 00:27:15.446 "ddgst": ${ddgst:-false} 00:27:15.446 }, 00:27:15.446 "method": "bdev_nvme_attach_controller" 00:27:15.446 } 00:27:15.446 EOF 00:27:15.446 )") 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
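The same ten-subsystem JSON is regenerated here, this time fed to the bdevperf run started at shutdown.sh@91 above. For reference, the flags on that invocation, annotated from bdevperf's standard options rather than from the trace itself:

# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
#   --json /dev/fd/62   bdev/controller config read from the generated JSON printed below
#   -q 64               I/O queue depth per job
#   -o 65536            I/O size in bytes (64 KiB)
#   -w verify           workload that reads back and checks the data it wrote
#   -t 1                run time in seconds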
00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:15.446 18:02:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:15.446 "params": { 00:27:15.446 "name": "Nvme1", 00:27:15.446 "trtype": "tcp", 00:27:15.446 "traddr": "10.0.0.2", 00:27:15.446 "adrfam": "ipv4", 00:27:15.446 "trsvcid": "4420", 00:27:15.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:15.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:15.446 "hdgst": false, 00:27:15.446 "ddgst": false 00:27:15.446 }, 00:27:15.446 "method": "bdev_nvme_attach_controller" 00:27:15.446 },{ 00:27:15.446 "params": { 00:27:15.446 "name": "Nvme2", 00:27:15.446 "trtype": "tcp", 00:27:15.446 "traddr": "10.0.0.2", 00:27:15.446 "adrfam": "ipv4", 00:27:15.446 "trsvcid": "4420", 00:27:15.446 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:15.446 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:15.446 "hdgst": false, 00:27:15.446 "ddgst": false 00:27:15.446 }, 00:27:15.446 "method": "bdev_nvme_attach_controller" 00:27:15.446 },{ 00:27:15.446 "params": { 00:27:15.446 "name": "Nvme3", 00:27:15.446 "trtype": "tcp", 00:27:15.446 "traddr": "10.0.0.2", 00:27:15.446 "adrfam": "ipv4", 00:27:15.446 "trsvcid": "4420", 00:27:15.446 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:15.446 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:15.446 "hdgst": false, 00:27:15.446 "ddgst": false 00:27:15.446 }, 00:27:15.446 "method": "bdev_nvme_attach_controller" 00:27:15.446 },{ 00:27:15.446 "params": { 00:27:15.446 "name": "Nvme4", 00:27:15.446 "trtype": "tcp", 00:27:15.446 "traddr": "10.0.0.2", 00:27:15.446 "adrfam": "ipv4", 00:27:15.446 "trsvcid": "4420", 00:27:15.446 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:15.446 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:15.446 "hdgst": false, 00:27:15.446 "ddgst": false 00:27:15.446 }, 00:27:15.446 "method": "bdev_nvme_attach_controller" 00:27:15.446 },{ 00:27:15.446 "params": { 00:27:15.446 "name": "Nvme5", 00:27:15.446 "trtype": "tcp", 00:27:15.446 "traddr": "10.0.0.2", 00:27:15.446 "adrfam": "ipv4", 00:27:15.446 "trsvcid": "4420", 00:27:15.446 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:15.446 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:15.446 "hdgst": false, 00:27:15.446 "ddgst": false 00:27:15.446 }, 00:27:15.446 "method": "bdev_nvme_attach_controller" 00:27:15.446 },{ 00:27:15.446 "params": { 00:27:15.446 "name": "Nvme6", 00:27:15.446 "trtype": "tcp", 00:27:15.446 "traddr": "10.0.0.2", 00:27:15.446 "adrfam": "ipv4", 00:27:15.446 "trsvcid": "4420", 00:27:15.446 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:15.446 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:15.446 "hdgst": false, 00:27:15.446 "ddgst": false 00:27:15.446 }, 00:27:15.447 "method": "bdev_nvme_attach_controller" 00:27:15.447 },{ 00:27:15.447 "params": { 00:27:15.447 "name": "Nvme7", 00:27:15.447 "trtype": "tcp", 00:27:15.447 "traddr": "10.0.0.2", 00:27:15.447 "adrfam": "ipv4", 00:27:15.447 "trsvcid": "4420", 00:27:15.447 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:15.447 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:15.447 "hdgst": false, 00:27:15.447 "ddgst": false 00:27:15.447 }, 00:27:15.447 "method": "bdev_nvme_attach_controller" 00:27:15.447 },{ 00:27:15.447 "params": { 00:27:15.447 "name": "Nvme8", 00:27:15.447 "trtype": "tcp", 00:27:15.447 "traddr": "10.0.0.2", 00:27:15.447 "adrfam": "ipv4", 00:27:15.447 "trsvcid": "4420", 00:27:15.447 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:15.447 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:15.447 "hdgst": false, 
00:27:15.447 "ddgst": false 00:27:15.447 }, 00:27:15.447 "method": "bdev_nvme_attach_controller" 00:27:15.447 },{ 00:27:15.447 "params": { 00:27:15.447 "name": "Nvme9", 00:27:15.447 "trtype": "tcp", 00:27:15.447 "traddr": "10.0.0.2", 00:27:15.447 "adrfam": "ipv4", 00:27:15.447 "trsvcid": "4420", 00:27:15.447 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:15.447 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:15.447 "hdgst": false, 00:27:15.447 "ddgst": false 00:27:15.447 }, 00:27:15.447 "method": "bdev_nvme_attach_controller" 00:27:15.447 },{ 00:27:15.447 "params": { 00:27:15.447 "name": "Nvme10", 00:27:15.447 "trtype": "tcp", 00:27:15.447 "traddr": "10.0.0.2", 00:27:15.447 "adrfam": "ipv4", 00:27:15.447 "trsvcid": "4420", 00:27:15.447 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:15.447 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:15.447 "hdgst": false, 00:27:15.447 "ddgst": false 00:27:15.447 }, 00:27:15.447 "method": "bdev_nvme_attach_controller" 00:27:15.447 }' 00:27:15.447 [2024-07-20 18:02:50.107956] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:15.447 [2024-07-20 18:02:50.108044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029780 ] 00:27:15.447 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.447 [2024-07-20 18:02:50.173500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.704 [2024-07-20 18:02:50.264299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.075 Running I/O for 1 seconds... 00:27:18.445 00:27:18.445 Latency(us) 00:27:18.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.445 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.445 Verification LBA range: start 0x0 length 0x400 00:27:18.445 Nvme1n1 : 1.20 160.15 10.01 0.00 0.00 395807.35 23495.87 374380.47 00:27:18.445 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.445 Verification LBA range: start 0x0 length 0x400 00:27:18.445 Nvme2n1 : 1.08 295.96 18.50 0.00 0.00 208707.89 25049.32 219035.88 00:27:18.445 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.445 Verification LBA range: start 0x0 length 0x400 00:27:18.445 Nvme3n1 : 1.08 118.94 7.43 0.00 0.00 514600.96 47768.46 428751.08 00:27:18.445 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.445 Verification LBA range: start 0x0 length 0x400 00:27:18.445 Nvme4n1 : 1.26 152.37 9.52 0.00 0.00 385332.46 48351.00 382147.70 00:27:18.445 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.445 Verification LBA range: start 0x0 length 0x400 00:27:18.446 Nvme5n1 : 1.27 151.65 9.48 0.00 0.00 381606.12 44661.57 394575.27 00:27:18.446 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.446 Verification LBA range: start 0x0 length 0x400 00:27:18.446 Nvme6n1 : 1.13 227.50 14.22 0.00 0.00 255946.52 24855.13 276513.37 00:27:18.446 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.446 Verification LBA range: start 0x0 length 0x400 00:27:18.446 Nvme7n1 : 1.17 218.34 13.65 0.00 0.00 263424.76 26214.40 279620.27 00:27:18.446 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.446 Verification LBA range: start 0x0 length 0x400 
00:27:18.446 Nvme8n1 : 1.18 324.10 20.26 0.00 0.00 174650.91 18835.53 205054.86 00:27:18.446 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.446 Verification LBA range: start 0x0 length 0x400 00:27:18.446 Nvme9n1 : 1.21 270.04 16.88 0.00 0.00 198575.86 8058.50 236123.78 00:27:18.446 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.446 Verification LBA range: start 0x0 length 0x400 00:27:18.446 Nvme10n1 : 1.22 263.35 16.46 0.00 0.00 208604.73 21554.06 257872.02 00:27:18.446 =================================================================================================================== 00:27:18.446 Total : 2182.39 136.40 0.00 0.00 267867.35 8058.50 428751.08 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:18.702 rmmod nvme_tcp 00:27:18.702 rmmod nvme_fabrics 00:27:18.702 rmmod nvme_keyring 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1029302 ']' 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1029302 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 1029302 ']' 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 1029302 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1029302 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:18.702 18:02:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1029302' 00:27:18.702 killing process with pid 1029302 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 1029302 00:27:18.702 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 1029302 00:27:19.265 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:19.265 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:19.265 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:19.265 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:19.265 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:19.265 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.265 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.265 18:02:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.194 18:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:21.194 00:27:21.194 real 0m11.939s 00:27:21.194 user 0m34.563s 00:27:21.194 sys 0m3.320s 00:27:21.194 18:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:21.194 18:02:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:21.194 ************************************ 00:27:21.194 END TEST nvmf_shutdown_tc1 00:27:21.194 ************************************ 00:27:21.194 18:02:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:21.194 18:02:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:21.194 18:02:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:21.194 18:02:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:21.453 ************************************ 00:27:21.453 START TEST nvmf_shutdown_tc2 00:27:21.453 ************************************ 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:21.453 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:21.453 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:21.453 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:21.453 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:21.453 18:02:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.453 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:21.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:27:21.454 00:27:21.454 --- 10.0.0.2 ping statistics --- 00:27:21.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.454 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:21.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:27:21.454 00:27:21.454 --- 10.0.0.1 ping statistics --- 00:27:21.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.454 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1030669 
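Collapsed from the nvmf_tcp_init trace above: the first e810 port (cvl_0_0) is moved into a network namespace and given the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, with TCP port 4420 opened between them. The commands, as traced:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two ping checks above confirm reachability in both directions before nvmf_tgt is started inside the namespace.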
00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1030669 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1030669 ']' 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:21.454 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.454 [2024-07-20 18:02:56.226999] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:21.454 [2024-07-20 18:02:56.227097] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.711 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.711 [2024-07-20 18:02:56.290757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:21.712 [2024-07-20 18:02:56.376329] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:21.712 [2024-07-20 18:02:56.376380] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.712 [2024-07-20 18:02:56.376408] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.712 [2024-07-20 18:02:56.376420] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:21.712 [2024-07-20 18:02:56.376430] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
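The -m 0x1E reactor mask on the nvmf_tgt invocation above selects cores 1 through 4 (bit 0 is clear, so core 0 is left free), which matches the four reactor notices that follow; -e 0xFFFF is the tracepoint group mask echoed in the notice above, and -i 0 appears to pick the shared-memory instance id, consistent with --file-prefix=spdk0 in the EAL parameters. A quick shell sketch for decoding such a mask:

mask=0x1E
printf '%s -> cores:' "$mask"
for c in {0..31}; do (( (mask >> c) & 1 )) && printf ' %d' "$c"; done; echo
# prints: 0x1E -> cores: 1 2 3 4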
00:27:21.712 [2024-07-20 18:02:56.376516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:21.712 [2024-07-20 18:02:56.376543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:21.712 [2024-07-20 18:02:56.376601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:21.712 [2024-07-20 18:02:56.376603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.712 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:21.712 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:21.712 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:21.712 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:21.712 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.969 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.969 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:21.969 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.969 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.969 [2024-07-20 18:02:56.533622] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.969 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.969 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:21.970 18:02:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.970 18:02:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.970 Malloc1 00:27:21.970 [2024-07-20 18:02:56.623359] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.970 Malloc2 00:27:21.970 Malloc3 00:27:21.970 Malloc4 00:27:22.228 Malloc5 00:27:22.228 Malloc6 00:27:22.228 Malloc7 00:27:22.228 Malloc8 00:27:22.228 Malloc9 00:27:22.486 Malloc10 00:27:22.486 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.486 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:22.486 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1030780 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1030780 /var/tmp/bdevperf.sock 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1030780 ']' 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
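The loop above only appends one fragment per subsystem to rpcs.txt and replays the file through rpc_cmd; the fragments themselves are not echoed in this trace. The resulting target state (Malloc1..Malloc10 backing ten subsystems, all listening on 10.0.0.2:4420) could be reproduced with SPDK's standard RPCs roughly as follows; the bdev size and serial numbers are illustrative guesses, not values taken from the script:

# Hypothetical reconstruction of the per-subsystem setup; the real shutdown.sh
# fragment is not visible in the trace above.
for i in {1..10}; do
    rpc_cmd bdev_malloc_create -b "Malloc$i" 64 512                        # 64 MiB, 512 B blocks (assumed size)
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done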
00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:22.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.487 { 00:27:22.487 "params": { 00:27:22.487 "name": "Nvme$subsystem", 00:27:22.487 "trtype": "$TEST_TRANSPORT", 00:27:22.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.487 "adrfam": "ipv4", 00:27:22.487 "trsvcid": "$NVMF_PORT", 00:27:22.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.487 "hdgst": ${hdgst:-false}, 00:27:22.487 "ddgst": ${ddgst:-false} 00:27:22.487 }, 00:27:22.487 "method": "bdev_nvme_attach_controller" 00:27:22.487 } 00:27:22.487 EOF 00:27:22.487 )") 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.487 { 00:27:22.487 "params": { 00:27:22.487 "name": "Nvme$subsystem", 00:27:22.487 "trtype": "$TEST_TRANSPORT", 00:27:22.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.487 "adrfam": "ipv4", 00:27:22.487 "trsvcid": "$NVMF_PORT", 00:27:22.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.487 "hdgst": ${hdgst:-false}, 00:27:22.487 "ddgst": ${ddgst:-false} 00:27:22.487 }, 00:27:22.487 "method": "bdev_nvme_attach_controller" 00:27:22.487 } 00:27:22.487 EOF 00:27:22.487 )") 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.487 { 00:27:22.487 "params": { 00:27:22.487 "name": "Nvme$subsystem", 00:27:22.487 "trtype": "$TEST_TRANSPORT", 00:27:22.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.487 "adrfam": "ipv4", 00:27:22.487 "trsvcid": "$NVMF_PORT", 00:27:22.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.487 "hdgst": ${hdgst:-false}, 00:27:22.487 "ddgst": ${ddgst:-false} 00:27:22.487 }, 00:27:22.487 "method": "bdev_nvme_attach_controller" 00:27:22.487 } 00:27:22.487 EOF 00:27:22.487 )") 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.487 { 00:27:22.487 "params": { 00:27:22.487 "name": "Nvme$subsystem", 00:27:22.487 "trtype": "$TEST_TRANSPORT", 00:27:22.487 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.487 "adrfam": "ipv4", 00:27:22.487 "trsvcid": "$NVMF_PORT", 00:27:22.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.487 "hdgst": ${hdgst:-false}, 00:27:22.487 "ddgst": ${ddgst:-false} 00:27:22.487 }, 00:27:22.487 "method": "bdev_nvme_attach_controller" 00:27:22.487 } 00:27:22.487 EOF 00:27:22.487 )") 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.487 { 00:27:22.487 "params": { 00:27:22.487 "name": "Nvme$subsystem", 00:27:22.487 "trtype": "$TEST_TRANSPORT", 00:27:22.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.487 "adrfam": "ipv4", 00:27:22.487 "trsvcid": "$NVMF_PORT", 00:27:22.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.487 "hdgst": ${hdgst:-false}, 00:27:22.487 "ddgst": ${ddgst:-false} 00:27:22.487 }, 00:27:22.487 "method": "bdev_nvme_attach_controller" 00:27:22.487 } 00:27:22.487 EOF 00:27:22.487 )") 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.487 { 00:27:22.487 "params": { 00:27:22.487 "name": "Nvme$subsystem", 00:27:22.487 "trtype": "$TEST_TRANSPORT", 00:27:22.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.487 "adrfam": "ipv4", 00:27:22.487 "trsvcid": "$NVMF_PORT", 00:27:22.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.487 "hdgst": ${hdgst:-false}, 00:27:22.487 "ddgst": ${ddgst:-false} 00:27:22.487 }, 00:27:22.487 "method": "bdev_nvme_attach_controller" 00:27:22.487 } 00:27:22.487 EOF 00:27:22.487 )") 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.487 { 00:27:22.487 "params": { 00:27:22.487 "name": "Nvme$subsystem", 00:27:22.487 "trtype": "$TEST_TRANSPORT", 00:27:22.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.487 "adrfam": "ipv4", 00:27:22.487 "trsvcid": "$NVMF_PORT", 00:27:22.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.487 "hdgst": ${hdgst:-false}, 00:27:22.487 "ddgst": ${ddgst:-false} 00:27:22.487 }, 00:27:22.487 "method": "bdev_nvme_attach_controller" 00:27:22.487 } 00:27:22.487 EOF 00:27:22.487 )") 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.487 { 00:27:22.487 "params": { 00:27:22.487 "name": "Nvme$subsystem", 00:27:22.487 "trtype": "$TEST_TRANSPORT", 00:27:22.487 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:22.487 "adrfam": "ipv4", 00:27:22.487 "trsvcid": "$NVMF_PORT", 00:27:22.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.487 "hdgst": ${hdgst:-false}, 00:27:22.487 "ddgst": ${ddgst:-false} 00:27:22.487 }, 00:27:22.487 "method": "bdev_nvme_attach_controller" 00:27:22.487 } 00:27:22.487 EOF 00:27:22.487 )") 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.487 { 00:27:22.487 "params": { 00:27:22.487 "name": "Nvme$subsystem", 00:27:22.487 "trtype": "$TEST_TRANSPORT", 00:27:22.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.487 "adrfam": "ipv4", 00:27:22.487 "trsvcid": "$NVMF_PORT", 00:27:22.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.487 "hdgst": ${hdgst:-false}, 00:27:22.487 "ddgst": ${ddgst:-false} 00:27:22.487 }, 00:27:22.487 "method": "bdev_nvme_attach_controller" 00:27:22.487 } 00:27:22.487 EOF 00:27:22.487 )") 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.487 { 00:27:22.487 "params": { 00:27:22.487 "name": "Nvme$subsystem", 00:27:22.487 "trtype": "$TEST_TRANSPORT", 00:27:22.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.487 "adrfam": "ipv4", 00:27:22.487 "trsvcid": "$NVMF_PORT", 00:27:22.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.487 "hdgst": ${hdgst:-false}, 00:27:22.487 "ddgst": ${ddgst:-false} 00:27:22.487 }, 00:27:22.487 "method": "bdev_nvme_attach_controller" 00:27:22.487 } 00:27:22.487 EOF 00:27:22.487 )") 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:27:22.487 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:22.488 18:02:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:22.488 "params": { 00:27:22.488 "name": "Nvme1", 00:27:22.488 "trtype": "tcp", 00:27:22.488 "traddr": "10.0.0.2", 00:27:22.488 "adrfam": "ipv4", 00:27:22.488 "trsvcid": "4420", 00:27:22.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:22.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:22.488 "hdgst": false, 00:27:22.488 "ddgst": false 00:27:22.488 }, 00:27:22.488 "method": "bdev_nvme_attach_controller" 00:27:22.488 },{ 00:27:22.488 "params": { 00:27:22.488 "name": "Nvme2", 00:27:22.488 "trtype": "tcp", 00:27:22.488 "traddr": "10.0.0.2", 00:27:22.488 "adrfam": "ipv4", 00:27:22.488 "trsvcid": "4420", 00:27:22.488 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:22.488 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:22.488 "hdgst": false, 00:27:22.488 "ddgst": false 00:27:22.488 }, 00:27:22.488 "method": "bdev_nvme_attach_controller" 00:27:22.488 },{ 00:27:22.488 "params": { 00:27:22.488 "name": "Nvme3", 00:27:22.488 "trtype": "tcp", 00:27:22.488 "traddr": "10.0.0.2", 00:27:22.488 "adrfam": "ipv4", 00:27:22.488 "trsvcid": "4420", 00:27:22.488 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:22.488 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:22.488 "hdgst": false, 00:27:22.488 "ddgst": false 00:27:22.488 }, 00:27:22.488 "method": "bdev_nvme_attach_controller" 00:27:22.488 },{ 00:27:22.488 "params": { 00:27:22.488 "name": "Nvme4", 00:27:22.488 "trtype": "tcp", 00:27:22.488 "traddr": "10.0.0.2", 00:27:22.488 "adrfam": "ipv4", 00:27:22.488 "trsvcid": "4420", 00:27:22.488 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:22.488 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:22.488 "hdgst": false, 00:27:22.488 "ddgst": false 00:27:22.488 }, 00:27:22.488 "method": "bdev_nvme_attach_controller" 00:27:22.488 },{ 00:27:22.488 "params": { 00:27:22.488 "name": "Nvme5", 00:27:22.488 "trtype": "tcp", 00:27:22.488 "traddr": "10.0.0.2", 00:27:22.488 "adrfam": "ipv4", 00:27:22.488 "trsvcid": "4420", 00:27:22.488 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:22.488 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:22.488 "hdgst": false, 00:27:22.488 "ddgst": false 00:27:22.488 }, 00:27:22.488 "method": "bdev_nvme_attach_controller" 00:27:22.488 },{ 00:27:22.488 "params": { 00:27:22.488 "name": "Nvme6", 00:27:22.488 "trtype": "tcp", 00:27:22.488 "traddr": "10.0.0.2", 00:27:22.488 "adrfam": "ipv4", 00:27:22.488 "trsvcid": "4420", 00:27:22.488 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:22.488 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:22.488 "hdgst": false, 00:27:22.488 "ddgst": false 00:27:22.488 }, 00:27:22.488 "method": "bdev_nvme_attach_controller" 00:27:22.488 },{ 00:27:22.488 "params": { 00:27:22.488 "name": "Nvme7", 00:27:22.488 "trtype": "tcp", 00:27:22.488 "traddr": "10.0.0.2", 00:27:22.488 "adrfam": "ipv4", 00:27:22.488 "trsvcid": "4420", 00:27:22.488 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:22.488 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:22.488 "hdgst": false, 00:27:22.488 "ddgst": false 00:27:22.488 }, 00:27:22.488 "method": "bdev_nvme_attach_controller" 00:27:22.488 },{ 00:27:22.488 "params": { 00:27:22.488 "name": "Nvme8", 00:27:22.488 "trtype": "tcp", 00:27:22.488 "traddr": "10.0.0.2", 00:27:22.488 "adrfam": "ipv4", 00:27:22.488 "trsvcid": "4420", 00:27:22.488 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:22.488 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:22.488 "hdgst": false, 
00:27:22.488 "ddgst": false 00:27:22.488 }, 00:27:22.488 "method": "bdev_nvme_attach_controller" 00:27:22.488 },{ 00:27:22.488 "params": { 00:27:22.488 "name": "Nvme9", 00:27:22.488 "trtype": "tcp", 00:27:22.488 "traddr": "10.0.0.2", 00:27:22.488 "adrfam": "ipv4", 00:27:22.488 "trsvcid": "4420", 00:27:22.488 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:22.488 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:22.488 "hdgst": false, 00:27:22.488 "ddgst": false 00:27:22.488 }, 00:27:22.488 "method": "bdev_nvme_attach_controller" 00:27:22.488 },{ 00:27:22.488 "params": { 00:27:22.488 "name": "Nvme10", 00:27:22.488 "trtype": "tcp", 00:27:22.488 "traddr": "10.0.0.2", 00:27:22.488 "adrfam": "ipv4", 00:27:22.488 "trsvcid": "4420", 00:27:22.488 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:22.488 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:22.488 "hdgst": false, 00:27:22.488 "ddgst": false 00:27:22.488 }, 00:27:22.488 "method": "bdev_nvme_attach_controller" 00:27:22.488 }' 00:27:22.488 [2024-07-20 18:02:57.138685] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:22.488 [2024-07-20 18:02:57.138763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030780 ] 00:27:22.488 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.488 [2024-07-20 18:02:57.203887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.745 [2024-07-20 18:02:57.291190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.642 Running I/O for 10 seconds... 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:24.642 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:24.900 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:24.900 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:24.900 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:24.900 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:24.900 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.900 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.900 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.900 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=73 00:27:24.900 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 73 -ge 100 ']' 00:27:24.900 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=142 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 142 -ge 100 ']' 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1030780 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1030780 ']' 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1030780 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 
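waitforio, stepped through above, is a bounded poll of bdevperf's RPC socket: it reads num_read_ops for Nvme1n1 via bdev_get_iostat and declares the workload live once at least 100 reads have completed (3 -> 73 -> 142 in this run). A condensed equivalent of that helper, using the harness's rpc_cmd wrapper, not the literal implementation:

# Poll an iostat counter until the bdev has serviced >= 100 reads, or give up
# after ten attempts spaced 0.25 s apart (mirrors the loop traced above).
waitforio() {
    local sock=$1 bdev=$2 i ops
    for ((i = 10; i > 0; i--)); do
        ops=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [ "$ops" -ge 100 ]; then
            return 0
        fi
        sleep 0.25
    done
    return 1
}
# e.g. waitforio /var/tmp/bdevperf.sock Nvme1n1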
00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1030780 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1030780' 00:27:25.158 killing process with pid 1030780 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1030780 00:27:25.158 18:02:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1030780 00:27:25.416 Received shutdown signal, test time was about 0.975842 seconds 00:27:25.416 00:27:25.416 Latency(us) 00:27:25.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:25.416 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.416 Verification LBA range: start 0x0 length 0x400 00:27:25.416 Nvme1n1 : 0.95 269.27 16.83 0.00 0.00 232791.99 13786.83 268746.15 00:27:25.416 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.416 Verification LBA range: start 0x0 length 0x400 00:27:25.416 Nvme2n1 : 0.95 201.13 12.57 0.00 0.00 308427.28 25631.86 287387.50 00:27:25.416 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.416 Verification LBA range: start 0x0 length 0x400 00:27:25.416 Nvme3n1 : 0.96 200.19 12.51 0.00 0.00 303726.24 24369.68 301368.51 00:27:25.416 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.416 Verification LBA range: start 0x0 length 0x400 00:27:25.416 Nvme4n1 : 0.92 207.91 12.99 0.00 0.00 285589.05 22719.15 243891.01 00:27:25.416 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.416 Verification LBA range: start 0x0 length 0x400 00:27:25.416 Nvme5n1 : 0.91 209.86 13.12 0.00 0.00 276420.33 22913.33 270299.59 00:27:25.416 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.416 Verification LBA range: start 0x0 length 0x400 00:27:25.416 Nvme6n1 : 0.93 206.86 12.93 0.00 0.00 275245.13 41748.86 250104.79 00:27:25.416 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.416 Verification LBA range: start 0x0 length 0x400 00:27:25.416 Nvme7n1 : 0.98 196.92 12.31 0.00 0.00 284772.38 23301.69 324670.20 00:27:25.416 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.416 Verification LBA range: start 0x0 length 0x400 00:27:25.416 Nvme8n1 : 0.96 265.84 16.62 0.00 0.00 206081.33 22622.06 285834.05 00:27:25.416 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.416 Verification LBA range: start 0x0 length 0x400 00:27:25.416 Nvme9n1 : 0.94 204.92 12.81 0.00 0.00 260041.39 26020.22 248551.35 00:27:25.416 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.416 Verification LBA range: start 0x0 length 0x400 00:27:25.416 Nvme10n1 : 0.95 202.74 12.67 0.00 0.00 257641.69 22233.69 285834.05 00:27:25.416 =================================================================================================================== 00:27:25.416 Total : 2165.64 
135.35 0.00 0.00 265971.37 13786.83 324670.20 00:27:25.674 18:03:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1030669 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:26.633 rmmod nvme_tcp 00:27:26.633 rmmod nvme_fabrics 00:27:26.633 rmmod nvme_keyring 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1030669 ']' 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1030669 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1030669 ']' 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1030669 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1030669 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1030669' 00:27:26.633 killing process with pid 1030669 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1030669 00:27:26.633 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1030669 00:27:27.200 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 
-- # '[' '' == iso ']' 00:27:27.200 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:27.200 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:27.200 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:27.200 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:27.200 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.200 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.200 18:03:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:29.099 00:27:29.099 real 0m7.820s 00:27:29.099 user 0m23.806s 00:27:29.099 sys 0m1.549s 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:29.099 ************************************ 00:27:29.099 END TEST nvmf_shutdown_tc2 00:27:29.099 ************************************ 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:29.099 ************************************ 00:27:29.099 START TEST nvmf_shutdown_tc3 00:27:29.099 ************************************ 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.099 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 
00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:29.100 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:29.100 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:29.100 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.100 
18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:29.100 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:29.100 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.357 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.357 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:29.357 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.357 18:03:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:29.357 18:03:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:29.357 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:29.357 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:29.357 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:29.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:29.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:27:29.357 00:27:29.357 --- 10.0.0.2 ping statistics --- 00:27:29.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.357 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:27:29.357 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:29.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:29.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:27:29.357 00:27:29.357 --- 10.0.0.1 ping statistics --- 00:27:29.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.357 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:27:29.357 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.357 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:29.357 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:29.357 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.357 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:29.357 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:29.358 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.358 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:29.358 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:29.358 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:29.358 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:29.358 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:29.358 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:29.358 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1031869 00:27:29.358 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:29.358 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1031869 00:27:29.358 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1031869 ']' 00:27:29.358 18:03:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.358 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:29.358 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.358 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:29.358 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:29.358 [2024-07-20 18:03:04.100756] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:29.358 [2024-07-20 18:03:04.100857] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.358 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.615 [2024-07-20 18:03:04.170589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:29.615 [2024-07-20 18:03:04.259539] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.615 [2024-07-20 18:03:04.259607] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.615 [2024-07-20 18:03:04.259621] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.615 [2024-07-20 18:03:04.259632] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.615 [2024-07-20 18:03:04.259642] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
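The target for tc3 is started the same way as for tc2: nvmf_tgt runs inside the target namespace (the repeated "ip netns exec cvl_0_0_ns_spdk" prefix appears to come from the harness re-prepending its namespace command once per test case; a single prefix is all that is functionally needed), with -m 0x1E placing reactors on cores 1-4 and -e 0xFFFF enabling every tracepoint group. A minimal sketch of the launch plus the wait-for-RPC-socket step that waitforlisten performs; the retry loop here is a simplification of the real helper and the relative binary path is an assumption:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Do not issue configuration RPCs until the UNIX-domain RPC socket exists.
for ((i = 0; i < 100; i++)); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done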
00:27:29.615 [2024-07-20 18:03:04.259707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.615 [2024-07-20 18:03:04.259757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:29.615 [2024-07-20 18:03:04.259759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.615 [2024-07-20 18:03:04.259737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:29.615 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:29.615 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:29.615 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:29.615 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:29.615 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:29.615 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:29.615 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:29.615 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.615 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:29.615 [2024-07-20 18:03:04.396390] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:29.615 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.615 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:29.615 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:29.615 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:29.615 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:29.615 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:29.615 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.615 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:29.872 18:03:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.872 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:29.872 Malloc1 00:27:29.872 [2024-07-20 18:03:04.471598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.872 Malloc2 00:27:29.872 Malloc3 00:27:29.872 Malloc4 00:27:29.872 Malloc5 00:27:30.129 Malloc6 00:27:30.129 Malloc7 00:27:30.129 Malloc8 00:27:30.129 Malloc9 00:27:30.129 Malloc10 00:27:30.129 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.129 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:30.129 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:30.129 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1031931 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1031931 /var/tmp/bdevperf.sock 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1031931 ']' 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:30.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
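The trace above shows shutdown.sh creating the TCP transport, then appending one block of RPCs per subsystem into rpcs.txt and replaying the whole file with a single rpc_cmd call, which is what produces Malloc1 through Malloc10 and the listener on 10.0.0.2:4420. Unrolled, that per-subsystem setup corresponds to roughly the following sketch; the transport line and the Malloc/cnode names come from the trace, while the malloc geometry and serial numbers are illustrative assumptions, not taken from the log:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192     # as traced above
    for i in {1..10}; do
        # one malloc bdev, one subsystem, one namespace, one TCP listener per index;
        # 64 MiB / 512 B blocks and the SPDK$i serial are assumptions for this sketch
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

The bdevperf instance launched next attaches to each of these subsystems as Nvme1 through Nvme10, using the JSON that gen_nvmf_target_json prints in the trace below.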
00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.404 { 00:27:30.404 "params": { 00:27:30.404 "name": "Nvme$subsystem", 00:27:30.404 "trtype": "$TEST_TRANSPORT", 00:27:30.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.404 "adrfam": "ipv4", 00:27:30.404 "trsvcid": "$NVMF_PORT", 00:27:30.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.404 "hdgst": ${hdgst:-false}, 00:27:30.404 "ddgst": ${ddgst:-false} 00:27:30.404 }, 00:27:30.404 "method": "bdev_nvme_attach_controller" 00:27:30.404 } 00:27:30.404 EOF 00:27:30.404 )") 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.404 { 00:27:30.404 "params": { 00:27:30.404 "name": "Nvme$subsystem", 00:27:30.404 "trtype": "$TEST_TRANSPORT", 00:27:30.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.404 "adrfam": "ipv4", 00:27:30.404 "trsvcid": "$NVMF_PORT", 00:27:30.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.404 "hdgst": ${hdgst:-false}, 00:27:30.404 "ddgst": ${ddgst:-false} 00:27:30.404 }, 00:27:30.404 "method": "bdev_nvme_attach_controller" 00:27:30.404 } 00:27:30.404 EOF 00:27:30.404 )") 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.404 { 00:27:30.404 "params": { 00:27:30.404 "name": "Nvme$subsystem", 00:27:30.404 "trtype": "$TEST_TRANSPORT", 00:27:30.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.404 "adrfam": "ipv4", 00:27:30.404 "trsvcid": "$NVMF_PORT", 00:27:30.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.404 "hdgst": ${hdgst:-false}, 00:27:30.404 "ddgst": ${ddgst:-false} 00:27:30.404 }, 00:27:30.404 "method": "bdev_nvme_attach_controller" 00:27:30.404 } 00:27:30.404 EOF 00:27:30.404 )") 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.404 { 00:27:30.404 "params": { 00:27:30.404 "name": "Nvme$subsystem", 00:27:30.404 "trtype": "$TEST_TRANSPORT", 00:27:30.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.404 "adrfam": "ipv4", 00:27:30.404 "trsvcid": "$NVMF_PORT", 
00:27:30.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.404 "hdgst": ${hdgst:-false}, 00:27:30.404 "ddgst": ${ddgst:-false} 00:27:30.404 }, 00:27:30.404 "method": "bdev_nvme_attach_controller" 00:27:30.404 } 00:27:30.404 EOF 00:27:30.404 )") 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.404 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.404 { 00:27:30.404 "params": { 00:27:30.404 "name": "Nvme$subsystem", 00:27:30.404 "trtype": "$TEST_TRANSPORT", 00:27:30.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.405 "adrfam": "ipv4", 00:27:30.405 "trsvcid": "$NVMF_PORT", 00:27:30.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.405 "hdgst": ${hdgst:-false}, 00:27:30.405 "ddgst": ${ddgst:-false} 00:27:30.405 }, 00:27:30.405 "method": "bdev_nvme_attach_controller" 00:27:30.405 } 00:27:30.405 EOF 00:27:30.405 )") 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.405 { 00:27:30.405 "params": { 00:27:30.405 "name": "Nvme$subsystem", 00:27:30.405 "trtype": "$TEST_TRANSPORT", 00:27:30.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.405 "adrfam": "ipv4", 00:27:30.405 "trsvcid": "$NVMF_PORT", 00:27:30.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.405 "hdgst": ${hdgst:-false}, 00:27:30.405 "ddgst": ${ddgst:-false} 00:27:30.405 }, 00:27:30.405 "method": "bdev_nvme_attach_controller" 00:27:30.405 } 00:27:30.405 EOF 00:27:30.405 )") 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.405 { 00:27:30.405 "params": { 00:27:30.405 "name": "Nvme$subsystem", 00:27:30.405 "trtype": "$TEST_TRANSPORT", 00:27:30.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.405 "adrfam": "ipv4", 00:27:30.405 "trsvcid": "$NVMF_PORT", 00:27:30.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.405 "hdgst": ${hdgst:-false}, 00:27:30.405 "ddgst": ${ddgst:-false} 00:27:30.405 }, 00:27:30.405 "method": "bdev_nvme_attach_controller" 00:27:30.405 } 00:27:30.405 EOF 00:27:30.405 )") 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.405 { 00:27:30.405 "params": { 00:27:30.405 "name": "Nvme$subsystem", 00:27:30.405 "trtype": "$TEST_TRANSPORT", 00:27:30.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.405 "adrfam": "ipv4", 00:27:30.405 "trsvcid": "$NVMF_PORT", 00:27:30.405 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.405 "hdgst": ${hdgst:-false}, 00:27:30.405 "ddgst": ${ddgst:-false} 00:27:30.405 }, 00:27:30.405 "method": "bdev_nvme_attach_controller" 00:27:30.405 } 00:27:30.405 EOF 00:27:30.405 )") 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.405 { 00:27:30.405 "params": { 00:27:30.405 "name": "Nvme$subsystem", 00:27:30.405 "trtype": "$TEST_TRANSPORT", 00:27:30.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.405 "adrfam": "ipv4", 00:27:30.405 "trsvcid": "$NVMF_PORT", 00:27:30.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.405 "hdgst": ${hdgst:-false}, 00:27:30.405 "ddgst": ${ddgst:-false} 00:27:30.405 }, 00:27:30.405 "method": "bdev_nvme_attach_controller" 00:27:30.405 } 00:27:30.405 EOF 00:27:30.405 )") 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.405 { 00:27:30.405 "params": { 00:27:30.405 "name": "Nvme$subsystem", 00:27:30.405 "trtype": "$TEST_TRANSPORT", 00:27:30.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.405 "adrfam": "ipv4", 00:27:30.405 "trsvcid": "$NVMF_PORT", 00:27:30.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.405 "hdgst": ${hdgst:-false}, 00:27:30.405 "ddgst": ${ddgst:-false} 00:27:30.405 }, 00:27:30.405 "method": "bdev_nvme_attach_controller" 00:27:30.405 } 00:27:30.405 EOF 00:27:30.405 )") 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:30.405 18:03:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:30.405 "params": { 00:27:30.405 "name": "Nvme1", 00:27:30.405 "trtype": "tcp", 00:27:30.405 "traddr": "10.0.0.2", 00:27:30.405 "adrfam": "ipv4", 00:27:30.405 "trsvcid": "4420", 00:27:30.405 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:30.405 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:30.405 "hdgst": false, 00:27:30.405 "ddgst": false 00:27:30.405 }, 00:27:30.405 "method": "bdev_nvme_attach_controller" 00:27:30.405 },{ 00:27:30.405 "params": { 00:27:30.405 "name": "Nvme2", 00:27:30.405 "trtype": "tcp", 00:27:30.405 "traddr": "10.0.0.2", 00:27:30.405 "adrfam": "ipv4", 00:27:30.405 "trsvcid": "4420", 00:27:30.405 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:30.405 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:30.405 "hdgst": false, 00:27:30.405 "ddgst": false 00:27:30.405 }, 00:27:30.405 "method": "bdev_nvme_attach_controller" 00:27:30.405 },{ 00:27:30.405 "params": { 00:27:30.405 "name": "Nvme3", 00:27:30.405 "trtype": "tcp", 00:27:30.405 "traddr": "10.0.0.2", 00:27:30.405 "adrfam": "ipv4", 00:27:30.405 "trsvcid": "4420", 00:27:30.405 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:30.405 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:30.405 "hdgst": false, 00:27:30.405 "ddgst": false 00:27:30.405 }, 00:27:30.405 "method": "bdev_nvme_attach_controller" 00:27:30.405 },{ 00:27:30.405 "params": { 00:27:30.405 "name": "Nvme4", 00:27:30.405 "trtype": "tcp", 00:27:30.405 "traddr": "10.0.0.2", 00:27:30.405 "adrfam": "ipv4", 00:27:30.405 "trsvcid": "4420", 00:27:30.405 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:30.405 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:30.405 "hdgst": false, 00:27:30.405 "ddgst": false 00:27:30.405 }, 00:27:30.405 "method": "bdev_nvme_attach_controller" 00:27:30.405 },{ 00:27:30.405 "params": { 00:27:30.405 "name": "Nvme5", 00:27:30.405 "trtype": "tcp", 00:27:30.405 "traddr": "10.0.0.2", 00:27:30.405 "adrfam": "ipv4", 00:27:30.405 "trsvcid": "4420", 00:27:30.405 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:30.405 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:30.405 "hdgst": false, 00:27:30.405 "ddgst": false 00:27:30.405 }, 00:27:30.405 "method": "bdev_nvme_attach_controller" 00:27:30.405 },{ 00:27:30.405 "params": { 00:27:30.405 "name": "Nvme6", 00:27:30.405 "trtype": "tcp", 00:27:30.405 "traddr": "10.0.0.2", 00:27:30.405 "adrfam": "ipv4", 00:27:30.405 "trsvcid": "4420", 00:27:30.405 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:30.405 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:30.405 "hdgst": false, 00:27:30.405 "ddgst": false 00:27:30.405 }, 00:27:30.405 "method": "bdev_nvme_attach_controller" 00:27:30.405 },{ 00:27:30.405 "params": { 00:27:30.405 "name": "Nvme7", 00:27:30.405 "trtype": "tcp", 00:27:30.405 "traddr": "10.0.0.2", 00:27:30.405 "adrfam": "ipv4", 00:27:30.405 "trsvcid": "4420", 00:27:30.405 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:30.405 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:30.405 "hdgst": false, 00:27:30.405 "ddgst": false 00:27:30.405 }, 00:27:30.405 "method": "bdev_nvme_attach_controller" 00:27:30.405 },{ 00:27:30.405 "params": { 00:27:30.405 "name": "Nvme8", 00:27:30.405 "trtype": "tcp", 00:27:30.405 "traddr": "10.0.0.2", 00:27:30.405 "adrfam": "ipv4", 00:27:30.405 "trsvcid": "4420", 00:27:30.405 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:30.405 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:30.405 "hdgst": false, 
00:27:30.405 "ddgst": false 00:27:30.405 }, 00:27:30.405 "method": "bdev_nvme_attach_controller" 00:27:30.405 },{ 00:27:30.405 "params": { 00:27:30.405 "name": "Nvme9", 00:27:30.405 "trtype": "tcp", 00:27:30.405 "traddr": "10.0.0.2", 00:27:30.405 "adrfam": "ipv4", 00:27:30.405 "trsvcid": "4420", 00:27:30.405 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:30.405 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:30.405 "hdgst": false, 00:27:30.405 "ddgst": false 00:27:30.405 }, 00:27:30.405 "method": "bdev_nvme_attach_controller" 00:27:30.405 },{ 00:27:30.405 "params": { 00:27:30.405 "name": "Nvme10", 00:27:30.405 "trtype": "tcp", 00:27:30.406 "traddr": "10.0.0.2", 00:27:30.406 "adrfam": "ipv4", 00:27:30.406 "trsvcid": "4420", 00:27:30.406 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:30.406 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:30.406 "hdgst": false, 00:27:30.406 "ddgst": false 00:27:30.406 }, 00:27:30.406 "method": "bdev_nvme_attach_controller" 00:27:30.406 }' 00:27:30.406 [2024-07-20 18:03:04.986341] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:30.406 [2024-07-20 18:03:04.986434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031931 ] 00:27:30.406 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.406 [2024-07-20 18:03:05.051696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.406 [2024-07-20 18:03:05.138456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.304 Running I/O for 10 seconds... 00:27:32.304 18:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:32.304 18:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:32.304 18:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:32.304 18:03:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.304 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:32.566 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.566 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:32.566 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:32.566 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:32.566 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:32.566 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:32.566 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:32.566 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:32.566 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:32.566 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:27:32.566 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:32.566 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.566 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:32.566 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.566 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:32.566 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:32.566 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:32.825 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:32.825 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:32.825 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:32.825 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:32.825 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.825 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:32.825 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.825 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:32.825 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:32.825 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1031869 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 1031869 ']' 00:27:33.093 18:03:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 1031869 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1031869 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1031869' 00:27:33.093 killing process with pid 1031869 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 1031869 00:27:33.093 18:03:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 1031869 00:27:33.093 [2024-07-20 18:03:07.821862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.821940] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.821957] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.821970] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.821983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.821997] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822036] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822061] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822108] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822122] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822135] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is 
same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822161] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822187] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822200] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822213] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822239] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822252] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822265] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822291] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822329] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822354] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822367] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822380] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822412] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822425] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822437] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822463] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822480] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822494] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822508] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822521] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822547] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822585] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822597] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822610] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822623] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822636] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822649] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822674] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822699] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the 
state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822712] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822724] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.822763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bda90 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.825009] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.825048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.825074] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.093 [2024-07-20 18:03:07.825104] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825135] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825163] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825208] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825229] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825271] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825338] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825361] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825381] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825426] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825449] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825513] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825559] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825580] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825624] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825645] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825689] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825712] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825790] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825820] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825886] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 
18:03:07.825907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825930] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825973] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.825997] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826084] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826125] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826235] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826279] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826373] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same 
with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826396] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.826437] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bdf30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.829159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bed30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.829192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26bed30 is same with the state(5) to be set 00:27:33.094 [2024-07-20 18:03:07.829449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.094 [2024-07-20 18:03:07.829494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.094 [2024-07-20 18:03:07.829524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.094 [2024-07-20 18:03:07.829541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.094 [2024-07-20 18:03:07.829558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.094 [2024-07-20 18:03:07.829572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.094 [2024-07-20 18:03:07.829589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.094 [2024-07-20 18:03:07.829603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.094 [2024-07-20 18:03:07.829619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.094 [2024-07-20 18:03:07.829634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.094 [2024-07-20 18:03:07.829650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.094 [2024-07-20 18:03:07.829665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.094 [2024-07-20 18:03:07.829680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.094 [2024-07-20 18:03:07.829694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.094 [2024-07-20 18:03:07.829710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.094 [2024-07-20 18:03:07.829724] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.094 [2024-07-20 18:03:07.829740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.094 [2024-07-20 18:03:07.829754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.094 [2024-07-20 18:03:07.829775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.094 [2024-07-20 18:03:07.829806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.094 [2024-07-20 18:03:07.829824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.094 [2024-07-20 18:03:07.829839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.094 [2024-07-20 18:03:07.829855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.094 [2024-07-20 18:03:07.829869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.094 [2024-07-20 18:03:07.829885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.094 [2024-07-20 18:03:07.829900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.094 [2024-07-20 18:03:07.829915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.829930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.829929] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.829946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.829961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.829964] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.829977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.829992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.829990] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-20 18:03:07.830085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830124] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with [2024-07-20 18:03:07.830155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:1the state(5) to be set 00:27:33.095 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with [2024-07-20 18:03:07.830203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:33.095 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 
18:03:07.830221] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830243] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830289] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:1[2024-07-20 18:03:07.830312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830337] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830360] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830384] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with [2024-07-20 18:03:07.830389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:1the state(5) to be set 00:27:33.095 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830407] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830429] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:1[2024-07-20 18:03:07.830452] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830501] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with [2024-07-20 18:03:07.830505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:33.095 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830523] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830548] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830594] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with [2024-07-20 18:03:07.830623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:1the state(5) 
to be set 00:27:33.095 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830642] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830711] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830740] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-07-20 18:03:07.830767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.095 [2024-07-20 18:03:07.830765] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.095 [2024-07-20 18:03:07.830800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.830802] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.830817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.830828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with [2024-07-20 18:03:07.830834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128the state(5) to be set 00:27:33.096 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.830850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.830852] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.830867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.830878] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.830885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.830903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128[2024-07-20 18:03:07.830899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.830921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.830925] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.830936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.830951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-20 18:03:07.830948] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.830970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.830973] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.830985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.830996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.831002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.831018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.831034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.831040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 
[2024-07-20 18:03:07.831049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128[2024-07-20 18:03:07.831061] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.831083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128[2024-07-20 18:03:07.831096] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.831116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831120] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.831132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.831160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831162] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.831177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.831186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with [2024-07-20 18:03:07.831192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:33.096 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.831211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.831226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831233] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.831243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.831274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-20 18:03:07.831271] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:33.096 the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.831293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.831294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.831307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831317] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with [2024-07-20 18:03:07.831323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:12the state(5) to be set 00:27:33.096 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.831339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831339] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.831354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.831361] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with [2024-07-20 18:03:07.831368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:33.096 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.831384] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.831400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831407] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.831419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.831430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with [2024-07-20 18:03:07.831434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:33.096 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.831453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.831468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x25c8ac0 is same with the state(5) to be set 00:27:33.096 [2024-07-20 18:03:07.831484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.831498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.831526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.831554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.831583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-07-20 18:03:07.831612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.831654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.096 [2024-07-20 18:03:07.832187] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x36e39d0 was disconnected and freed. reset controller. 
00:27:33.096 [2024-07-20 18:03:07.832299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.096 [2024-07-20 18:03:07.832322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.832338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.096 [2024-07-20 18:03:07.832351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.832365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.096 [2024-07-20 18:03:07.832379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.096 [2024-07-20 18:03:07.832398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.097 [2024-07-20 18:03:07.832412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.097 [2024-07-20 18:03:07.832426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a73c60 is same with the state(5) to be set 00:27:33.097 [2024-07-20 18:03:07.832467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.097 [2024-07-20 18:03:07.832487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.097 [2024-07-20 18:03:07.832502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.097 [2024-07-20 18:03:07.832516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.097 [2024-07-20 18:03:07.832530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.097 [2024-07-20 18:03:07.832543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.097 [2024-07-20 18:03:07.832557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.097 [2024-07-20 18:03:07.832571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.097 [2024-07-20 18:03:07.832584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28c4700 is same with the state(5) to be set 00:27:33.097 [2024-07-20 18:03:07.832629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.097 [2024-07-20 18:03:07.832649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.097 [2024-07-20 18:03:07.832664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.097 [2024-07-20 18:03:07.832677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.097 [2024-07-20 18:03:07.832691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.097 [2024-07-20 18:03:07.832705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.097 [2024-07-20 18:03:07.832719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.097 [2024-07-20 18:03:07.832732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.097 [2024-07-20 18:03:07.832746] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28f0770 is same with the state(5) to be set 00:27:33.097 [2024-07-20 18:03:07.832827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.097 [2024-07-20 18:03:07.832848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.097 [2024-07-20 18:03:07.832863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.097 [2024-07-20 18:03:07.832877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.097 [2024-07-20 18:03:07.832895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.097 [2024-07-20 18:03:07.832909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.097 [2024-07-20 18:03:07.832923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.097 [2024-07-20 18:03:07.832937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.097 [2024-07-20 18:03:07.832950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248cdf0 is same with the state(5) to be set 00:27:33.097 [2024-07-20 18:03:07.832994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.097 [2024-07-20 18:03:07.833014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.097 [2024-07-20 18:03:07.833029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.097 [2024-07-20 18:03:07.833043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.097 [2024-07-20 18:03:07.833043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with [2024-07-20 18:03:07.833058] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsthe state(5) to be set 00:27:33.097 id:0 cdw10:00000000 cdw11:00000000 00:27:33.097 [2024-07-20 18:03:07.833073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.097 [2024-07-20 18:03:07.833081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.097 [2024-07-20 18:03:07.833087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.097 [2024-07-20 18:03:07.833096] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.097 [2024-07-20 18:03:07.833112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with [2024-07-20 18:03:07.833112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:27:33.097 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.097 [2024-07-20 18:03:07.833127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with [2024-07-20 18:03:07.833128] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28cca60 is same the state(5) to be set 00:27:33.097 with the state(5) to be set 00:27:33.097 [2024-07-20 18:03:07.833142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.097 [2024-07-20 18:03:07.833155] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.097 [2024-07-20 18:03:07.833167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.097 [2024-07-20 18:03:07.833174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-07-20 18:03:07.833180] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with id:0 cdw10:00000000 cdw11:00000000 00:27:33.097 the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with [2024-07-20 18:03:07.833196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:27:33.098 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.098 [2024-07-20 18:03:07.833210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.098 [2024-07-20 18:03:07.833231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.098 [2024-07-20 18:03:07.833246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.098 [2024-07-20 18:03:07.833259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.098 [2024-07-20 
18:03:07.833273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.098 [2024-07-20 18:03:07.833287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.098 [2024-07-20 18:03:07.833300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28eaf50 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.098 [2024-07-20 18:03:07.833366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.098 [2024-07-20 18:03:07.833370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.098 [2024-07-20 18:03:07.833387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.098 [2024-07-20 18:03:07.833400] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.098 [2024-07-20 18:03:07.833414] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.098 [2024-07-20 18:03:07.833428] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.098 [2024-07-20 18:03:07.833441] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.098 [2024-07-20 18:03:07.833454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28c5400 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 
18:03:07.833494] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833519] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833558] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833580] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833628] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833653] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833742] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833755] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833780] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833803] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same 
with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833818] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833868] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833880] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833897] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833909] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833922] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833935] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833947] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833972] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.833997] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.834015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.834027] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.834040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.834052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.834064] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.834077] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c8f60 is same with the state(5) to be set 00:27:33.098 [2024-07-20 18:03:07.834474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.098 [2024-07-20 18:03:07.834500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.098 [2024-07-20 18:03:07.834521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.098 [2024-07-20 18:03:07.834537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.098 [2024-07-20 18:03:07.834553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.098 [2024-07-20 18:03:07.834568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.098 [2024-07-20 18:03:07.834599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.098 [2024-07-20 18:03:07.834614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.098 [2024-07-20 18:03:07.834635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.098 [2024-07-20 18:03:07.834650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.098 [2024-07-20 18:03:07.834666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.098 [2024-07-20 18:03:07.834685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.098 [2024-07-20 18:03:07.834701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.098 [2024-07-20 18:03:07.834715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.098 [2024-07-20 18:03:07.834730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.098 [2024-07-20 18:03:07.834745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.834760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.834805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.834824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.834844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.834861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.834875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.834891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.834906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.834922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.834936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.834952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.834967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.834983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.834997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-20 18:03:07.835391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835408] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835420] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same 
with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835452] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835467] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835506] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835520] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835548] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.099 [2024-07-20 18:03:07.835603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835616] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835645] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835659] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835672] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.099 [2024-07-20 18:03:07.835730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.099 [2024-07-20 18:03:07.835744] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.099 [2024-07-20 18:03:07.835757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.835771] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100
[2024-07-20 18:03:07.835773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.835800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.835822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.835833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.835840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.835846] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.835856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.835860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.835871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.835874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.835886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.835892] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.835906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.835907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.835922] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.835924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.835935] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.835944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.835948] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.835962] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.835961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100
[2024-07-20 18:03:07.835976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.835977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.835989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.835995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836028] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836060] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836073] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836087] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836111] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836143] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100
[2024-07-20 18:03:07.836156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836177] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836189] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836239] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836253] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836265] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836277] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836290] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836303] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100
[2024-07-20 18:03:07.836306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836320] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c98c0 is same with the state(5) to be set 00:27:33.100 [2024-07-20 18:03:07.836322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836589] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.100 [2024-07-20 18:03:07.836618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.100 [2024-07-20 18:03:07.836632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.836647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.101 [2024-07-20 18:03:07.836662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.836744] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2a43940 was disconnected and freed. reset controller. 00:27:33.101 [2024-07-20 18:03:07.857463] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:33.101 [2024-07-20 18:03:07.857554] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a73c60 (9): Bad file descriptor 00:27:33.101 [2024-07-20 18:03:07.857654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.101 [2024-07-20 18:03:07.857679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.857695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.101 [2024-07-20 18:03:07.857708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.857723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.101 [2024-07-20 18:03:07.857736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.857750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.101 [2024-07-20 18:03:07.857764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.857787] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a73e40 is same with the state(5) to be set 00:27:33.101 [2024-07-20 18:03:07.857832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28c4700 (9): Bad file descriptor 00:27:33.101 [2024-07-20 18:03:07.857863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28f0770 (9): Bad file descriptor 00:27:33.101 [2024-07-20 18:03:07.857914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.101 [2024-07-20 18:03:07.857934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.857949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.101 [2024-07-20 18:03:07.857963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.857977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.101 [2024-07-20 18:03:07.857990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.858004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.101 [2024-07-20 18:03:07.858017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.858030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a6bfd0 is same with the state(5) to be set 00:27:33.101 [2024-07-20 18:03:07.858077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.101 [2024-07-20 18:03:07.858105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.858120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.101 [2024-07-20 18:03:07.858146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.858161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.101 [2024-07-20 18:03:07.858174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.858187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.101 [2024-07-20 18:03:07.858201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.858213] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bb610 is same with the state(5) to be set 00:27:33.101 [2024-07-20 18:03:07.858242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248cdf0 (9): Bad file descriptor 00:27:33.101 [2024-07-20 18:03:07.858271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28cca60 (9): Bad file descriptor 00:27:33.101 [2024-07-20 18:03:07.858300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28eaf50 (9): Bad file descriptor 00:27:33.101 [2024-07-20 18:03:07.858324] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28c5400 (9): Bad file descriptor 00:27:33.101 [2024-07-20 
18:03:07.858533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.101 [2024-07-20 18:03:07.858557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.858579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.101 [2024-07-20 18:03:07.858594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.858611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.101 [2024-07-20 18:03:07.858625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.858641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.101 [2024-07-20 18:03:07.858655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.858671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.101 [2024-07-20 18:03:07.858685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.858700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.101 [2024-07-20 18:03:07.858714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.858730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.101 [2024-07-20 18:03:07.858744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.858759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.101 [2024-07-20 18:03:07.858783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.858814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.101 [2024-07-20 18:03:07.858830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.858846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.101 [2024-07-20 18:03:07.858861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.101 [2024-07-20 18:03:07.858876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.102 [2024-07-20 18:03:07.858890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.102 [2024-07-20 18:03:07.858906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.102 [2024-07-20 18:03:07.858920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.102 [2024-07-20 18:03:07.858936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.102 [2024-07-20 18:03:07.858950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.102 [2024-07-20 18:03:07.858965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.102 [2024-07-20 18:03:07.858979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.102 [2024-07-20 18:03:07.858995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.102 [2024-07-20 18:03:07.859009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.102 [2024-07-20 18:03:07.859024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.102 [2024-07-20 18:03:07.859039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.102 [2024-07-20 18:03:07.859055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.102 [2024-07-20 18:03:07.859069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.102 [2024-07-20 18:03:07.859086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.102 [2024-07-20 18:03:07.859100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.102 [2024-07-20 18:03:07.859115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.102 [2024-07-20 18:03:07.859129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.102 [2024-07-20 18:03:07.859145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.102 [2024-07-20 18:03:07.859159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.102 [2024-07-20 18:03:07.859175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.102 [2024-07-20 18:03:07.859192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.102 [2024-07-20 18:03:07.859209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.102 [2024-07-20 18:03:07.859223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.102 [2024-07-20 18:03:07.859238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.102 [2024-07-20 18:03:07.859252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.859978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.859992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860105] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.103 [2024-07-20 18:03:07.860598] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2896ce0 was disconnected and freed. reset controller. 00:27:33.103 [2024-07-20 18:03:07.860661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.103 [2024-07-20 18:03:07.860681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.860701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.860716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.860732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.860746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.860761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.860782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.860804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.860820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.860845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.860859] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.860883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.860898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.860914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.860930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.860946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.860961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.860977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.860991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.861980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.104 [2024-07-20 18:03:07.861996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.104 [2024-07-20 18:03:07.862010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.862661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.862742] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2a42470 was disconnected and freed. reset controller. 
00:27:33.105 [2024-07-20 18:03:07.864143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 
18:03:07.864480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 18:03:07.864756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.105 [2024-07-20 
18:03:07.864786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.105 [2024-07-20 18:03:07.864809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.864826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.864841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.864856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.864870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.864886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.864900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.864915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.864930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.864946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.864960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.864976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.864990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 
18:03:07.865107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 
18:03:07.865407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 
18:03:07.865710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.106 [2024-07-20 18:03:07.865914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.106 [2024-07-20 18:03:07.865930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.865944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.865959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.865973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.865989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.866003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 
18:03:07.866019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.866033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.866048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.866062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.866078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.866093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.866108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.866122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.866137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2897b70 is same with the state(5) to be set 00:27:33.107 [2024-07-20 18:03:07.866211] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2897b70 was disconnected and freed. reset controller. 00:27:33.107 [2024-07-20 18:03:07.866351] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:33.107 [2024-07-20 18:03:07.870323] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:33.107 [2024-07-20 18:03:07.870654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.107 [2024-07-20 18:03:07.870689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a73c60 with addr=10.0.0.2, port=4420 00:27:33.107 [2024-07-20 18:03:07.870715] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a73c60 is same with the state(5) to be set 00:27:33.107 [2024-07-20 18:03:07.870941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.107 [2024-07-20 18:03:07.870967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28eaf50 with addr=10.0.0.2, port=4420 00:27:33.107 [2024-07-20 18:03:07.870983] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28eaf50 is same with the state(5) to be set 00:27:33.107 [2024-07-20 18:03:07.871009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a73e40 (9): Bad file descriptor 00:27:33.107 [2024-07-20 18:03:07.871049] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:33.107 [2024-07-20 18:03:07.871084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a6bfd0 (9): Bad file descriptor 00:27:33.107 [2024-07-20 18:03:07.871109] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bb610 (9): Bad file descriptor 00:27:33.107 [2024-07-20 18:03:07.871771] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:33.107 [2024-07-20 18:03:07.871819] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:33.107 [2024-07-20 18:03:07.872093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.107 [2024-07-20 18:03:07.872120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28cca60 with addr=10.0.0.2, port=4420 00:27:33.107 [2024-07-20 18:03:07.872136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28cca60 is same with the state(5) to be set 00:27:33.107 [2024-07-20 18:03:07.872155] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a73c60 (9): Bad file descriptor 00:27:33.107 [2024-07-20 18:03:07.872175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28eaf50 (9): Bad file descriptor 00:27:33.107 [2024-07-20 18:03:07.872266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.872984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.872997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.873013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.873027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.873042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.873056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.873071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.873085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.107 [2024-07-20 18:03:07.873101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.107 [2024-07-20 18:03:07.873115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.873973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.873989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.874003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.874027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.874042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.874058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.874072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.874088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.874102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.874118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.874132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.874147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.874161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.874177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.874191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.874207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.874221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.874237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.874251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.875517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.875541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.875561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.875577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.875593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.875608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.875624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.875638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.108 [2024-07-20 18:03:07.875655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.108 [2024-07-20 18:03:07.875674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.875690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.875705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.875721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.875735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.875751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.875765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.875781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.875801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.875819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.875833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.875849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.875863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.875879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.875893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.875908] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.875923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.875938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.875952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.875968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.875982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.875998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.109 [2024-07-20 18:03:07.876949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.109 [2024-07-20 18:03:07.876964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.110 [2024-07-20 18:03:07.876978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.110 [2024-07-20 18:03:07.876993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.110 [2024-07-20 18:03:07.877007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.110 [2024-07-20 18:03:07.877022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.110 [2024-07-20 18:03:07.877036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.110 [2024-07-20 18:03:07.877052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.110 [2024-07-20 18:03:07.877066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.110 [2024-07-20 18:03:07.877082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.110 [2024-07-20 18:03:07.877095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.110 [2024-07-20 18:03:07.877111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:33.110 [2024-07-20 18:03:07.877125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.110 [2024-07-20 18:03:07.877141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.110 [2024-07-20 18:03:07.877155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.110 [2024-07-20 18:03:07.877170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.110 [2024-07-20 18:03:07.877184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.110 [2024-07-20 18:03:07.877203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.110 [2024-07-20 18:03:07.877218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.110 [2024-07-20 18:03:07.877233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.110 [2024-07-20 18:03:07.877247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.110 [2024-07-20 18:03:07.877262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.110 [2024-07-20 18:03:07.877276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.110 [2024-07-20 18:03:07.877292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.110 [2024-07-20 18:03:07.877306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.110 [2024-07-20 18:03:07.877321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.110 [2024-07-20 18:03:07.877337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.110 [2024-07-20 18:03:07.877353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.110 [2024-07-20 18:03:07.877367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.110 [2024-07-20 18:03:07.877383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.110 [2024-07-20 18:03:07.877398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.110 [2024-07-20 18:03:07.877414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.110 [2024-07-20 
18:03:07.877428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.110 [2024-07-20 18:03:07.877444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.110 [2024-07-20 18:03:07.877458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.879981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.879998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.880012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.880028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.880043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.880059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.880073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.880089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.880103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.880119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.880133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.880150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.880168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.880184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.372 [2024-07-20 18:03:07.880198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.372 [2024-07-20 18:03:07.880214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.880973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.880989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.881003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.881019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.881033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.881049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.881063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.881079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.881093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.881109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.881123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.881138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.881152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.881167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.881181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.881197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.881211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.881226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.373 [2024-07-20 18:03:07.881240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.373 [2024-07-20 18:03:07.882593] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.373 [2024-07-20 18:03:07.882963] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.373 [2024-07-20 18:03:07.883016] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.373 [2024-07-20 18:03:07.883044] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:33.373 [2024-07-20 18:03:07.883062] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:33.373 [2024-07-20 18:03:07.883384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.373 [2024-07-20 18:03:07.883414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28c4700 with addr=10.0.0.2, port=4420 00:27:33.373 [2024-07-20 18:03:07.883430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28c4700 is same with the state(5) to be set 00:27:33.373 [2024-07-20 18:03:07.883682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.373 [2024-07-20 18:03:07.883707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a6bfd0 with addr=10.0.0.2, port=4420 00:27:33.373 [2024-07-20 18:03:07.883722] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a6bfd0 is same with the state(5) to be set 00:27:33.373 [2024-07-20 18:03:07.883745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x28cca60 (9): Bad file descriptor 00:27:33.373 [2024-07-20 18:03:07.883764] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:33.373 [2024-07-20 18:03:07.883781] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:33.373 [2024-07-20 18:03:07.883803] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:33.373 [2024-07-20 18:03:07.883830] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:33.373 [2024-07-20 18:03:07.883844] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:33.373 [2024-07-20 18:03:07.883857] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:33.373 [2024-07-20 18:03:07.883905] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:33.373 [2024-07-20 18:03:07.883930] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:33.373 [2024-07-20 18:03:07.883973] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:33.374 [2024-07-20 18:03:07.883995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a6bfd0 (9): Bad file descriptor 00:27:33.374 [2024-07-20 18:03:07.884019] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28c4700 (9): Bad file descriptor 00:27:33.374 [2024-07-20 18:03:07.884169] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.374 [2024-07-20 18:03:07.884192] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.374 [2024-07-20 18:03:07.884417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.374 [2024-07-20 18:03:07.884443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248cdf0 with addr=10.0.0.2, port=4420 00:27:33.374 [2024-07-20 18:03:07.884459] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248cdf0 is same with the state(5) to be set 00:27:33.374 [2024-07-20 18:03:07.884670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.374 [2024-07-20 18:03:07.884694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28c5400 with addr=10.0.0.2, port=4420 00:27:33.374 [2024-07-20 18:03:07.884709] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28c5400 is same with the state(5) to be set 00:27:33.374 [2024-07-20 18:03:07.884962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.374 [2024-07-20 18:03:07.884988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28f0770 with addr=10.0.0.2, port=4420 00:27:33.374 [2024-07-20 18:03:07.885003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28f0770 is same with the state(5) to be set 00:27:33.374 [2024-07-20 18:03:07.885021] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:33.374 [2024-07-20 18:03:07.885039] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:33.374 [2024-07-20 18:03:07.885053] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:27:33.374 [2024-07-20 18:03:07.885923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.885948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.885973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.885988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886261] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.886974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.886988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.887004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.887017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.887033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.374 [2024-07-20 18:03:07.887047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.374 [2024-07-20 18:03:07.887064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.887942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.887957] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a462e0 is same with the state(5) to be set 00:27:33.375 [2024-07-20 18:03:07.889229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.889252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.889273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.889289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.889305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.889319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.889335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.889349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.889365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.889379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.889395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.889409] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.889424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.889438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.889454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.889468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.889484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.889498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.889514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.889528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.889549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.889563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.889579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.889603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.889619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.375 [2024-07-20 18:03:07.889633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.375 [2024-07-20 18:03:07.889650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.889665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.889681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.889695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.889711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.889725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.889741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.889755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.889771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.889785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.889810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.889826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.889841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.889856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.889872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.889886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.889902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.889917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.889932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.889952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.889969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.889984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.890000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.890014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.890030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.890044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.890060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.890073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.890089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.890104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.890120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.890134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.890150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.898825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.898900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.898916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.898934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.898950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.898966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.898981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.898998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.899013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.899030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.899045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.899074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.899095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.899111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.899125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.899141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.899155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.899171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.899186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.899201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.899216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.899232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.899246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.899262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.899277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.899293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.899307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.899323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.899337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.899353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.899368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.899383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.899398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:33.376 [2024-07-20 18:03:07.899413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.899428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.376 [2024-07-20 18:03:07.899445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.376 [2024-07-20 18:03:07.899464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.377 [2024-07-20 18:03:07.899481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-07-20 18:03:07.899496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.377 [2024-07-20 18:03:07.899512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-07-20 18:03:07.899526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.377 [2024-07-20 18:03:07.899542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-07-20 18:03:07.899556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.377 [2024-07-20 18:03:07.899572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-07-20 18:03:07.899586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.377 [2024-07-20 18:03:07.899602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-07-20 18:03:07.899616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.377 [2024-07-20 18:03:07.899632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-07-20 18:03:07.899646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.377 [2024-07-20 18:03:07.899662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-07-20 18:03:07.899677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.377 [2024-07-20 18:03:07.899693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-07-20 18:03:07.899707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.377 [2024-07-20 
18:03:07.899723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-07-20 18:03:07.899737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.377 [2024-07-20 18:03:07.899752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-07-20 18:03:07.899766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.377 [2024-07-20 18:03:07.899790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-07-20 18:03:07.899822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.377 [2024-07-20 18:03:07.899838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-07-20 18:03:07.899853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.377 [2024-07-20 18:03:07.899873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-07-20 18:03:07.899888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.377 [2024-07-20 18:03:07.899904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-07-20 18:03:07.899918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.377 [2024-07-20 18:03:07.899934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-07-20 18:03:07.899948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.377 [2024-07-20 18:03:07.899964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.377 [2024-07-20 18:03:07.899979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.377 [2024-07-20 18:03:07.899994] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28a0f70 is same with the state(5) to be set 00:27:33.377 [2024-07-20 18:03:07.901704] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
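(Aside, derived from the entries above rather than emitted by the harness: each aborted READ is len:128 blocks and successive commands advance the LBA by 128, so at the 512-byte malloc block size configured for this test one command covers exactly the 64 KiB I/O size reported in the bdevperf summary below.)

# Not from the run -- just the arithmetic behind that observation:
echo $((128 * 512))    # prints 65536, matching the "IO size: 65536" shown below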
00:27:33.377 [2024-07-20 18:03:07.901734] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:33.377 task offset: 19584 on job bdev=Nvme10n1 fails
00:27:33.377
00:27:33.377 Latency(us)
00:27:33.377 (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; each job ended in about the runtime shown, with error)
00:27:33.377 Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s    Average        min        max
00:27:33.377 Nvme1n1            :       0.89  143.89    8.99   71.95  0.00  293166.40   41166.32  265639.25
00:27:33.377 Nvme2n1            :       0.89  143.38    8.96   71.69  0.00  288173.57   23010.42  253211.69
00:27:33.377 Nvme3n1            :       0.88  145.15    9.07   72.57  0.00  278481.98   27573.67  281173.71
00:27:33.377 Nvme4n1            :       0.88  217.46   13.59   72.49  0.00  204557.65   23787.14  256318.58
00:27:33.377 Nvme5n1            :       0.88  145.77    9.11   72.89  0.00  265235.22   24855.13  281173.71
00:27:33.377 Nvme6n1            :       0.90  214.17   13.39   71.39  0.00  198957.70   22622.06  211268.65
00:27:33.377 Nvme7n1            :       0.90   70.86    4.43   70.86  0.00  392686.55   67963.26  321563.31
00:27:33.377 Nvme8n1            :       0.88  217.17   13.57   72.39  0.00  187076.84   13107.20  237677.23
00:27:33.377 Nvme9n1            :       0.92  139.85    8.74   69.93  0.00  254060.97   23592.96  236123.78
00:27:33.377 Nvme10n1           :       0.87  146.86    9.18   73.43  0.00  233759.48   22719.15  287387.50
00:27:33.377 ===================================================================================================================
00:27:33.377 Total              :            1584.55   99.03  719.57  0.00  249574.21   13107.20  321563.31
00:27:33.377 [2024-07-20 18:03:07.928739] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on
non-zero 00:27:33.377 [2024-07-20 18:03:07.928835] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:33.377 [2024-07-20 18:03:07.928921] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248cdf0 (9): Bad file descriptor 00:27:33.377 [2024-07-20 18:03:07.928950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28c5400 (9): Bad file descriptor 00:27:33.377 [2024-07-20 18:03:07.928969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28f0770 (9): Bad file descriptor 00:27:33.377 [2024-07-20 18:03:07.928986] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:33.377 [2024-07-20 18:03:07.929001] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:33.377 [2024-07-20 18:03:07.929018] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:33.377 [2024-07-20 18:03:07.929044] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:33.377 [2024-07-20 18:03:07.929058] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:33.377 [2024-07-20 18:03:07.929071] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:33.377 [2024-07-20 18:03:07.929161] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:33.377 [2024-07-20 18:03:07.929191] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:33.377 [2024-07-20 18:03:07.929211] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:33.377 [2024-07-20 18:03:07.929229] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:33.377 [2024-07-20 18:03:07.929248] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:33.377 [2024-07-20 18:03:07.929376] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.377 [2024-07-20 18:03:07.929398] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
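(Aside, not part of the captured output: a quick consistency check on the bdevperf table above. With 65536-byte I/Os, MiB/s should equal IOPS / 16, and the Total row should be the sum of the per-device rows; both hold up to rounding.)

# e.g. Nvme1n1: 143.89 IOPS at 64 KiB per I/O
awk 'BEGIN { printf "%.2f MiB/s\n", 143.89 * 65536 / 1048576 }'   # -> 8.99 MiB/s
# sum of the per-device IOPS column
awk 'BEGIN { print 143.89+143.38+145.15+217.46+145.77+214.17+70.86+217.17+139.85+146.86 }'   # ~1584.56 vs. Total 1584.55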
00:27:33.377 [2024-07-20 18:03:07.929820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.377 [2024-07-20 18:03:07.929855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bb610 with addr=10.0.0.2, port=4420 00:27:33.377 [2024-07-20 18:03:07.929875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bb610 is same with the state(5) to be set 00:27:33.377 [2024-07-20 18:03:07.930125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.377 [2024-07-20 18:03:07.930151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a73e40 with addr=10.0.0.2, port=4420 00:27:33.377 [2024-07-20 18:03:07.930178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a73e40 is same with the state(5) to be set 00:27:33.377 [2024-07-20 18:03:07.930194] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.377 [2024-07-20 18:03:07.930207] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.377 [2024-07-20 18:03:07.930220] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.377 [2024-07-20 18:03:07.930238] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:33.378 [2024-07-20 18:03:07.930252] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:33.378 [2024-07-20 18:03:07.930265] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:33.378 [2024-07-20 18:03:07.930281] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:33.378 [2024-07-20 18:03:07.930295] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:33.378 [2024-07-20 18:03:07.930307] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:33.378 [2024-07-20 18:03:07.930342] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:33.378 [2024-07-20 18:03:07.930364] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:33.378 [2024-07-20 18:03:07.930382] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:33.378 [2024-07-20 18:03:07.930411] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:33.378 [2024-07-20 18:03:07.930429] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:33.378 [2024-07-20 18:03:07.930446] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:33.378 [2024-07-20 18:03:07.931041] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:33.378 [2024-07-20 18:03:07.931068] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:33.378 [2024-07-20 18:03:07.931093] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:33.378 [2024-07-20 18:03:07.931129] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.378 [2024-07-20 18:03:07.931145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.378 [2024-07-20 18:03:07.931157] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.378 [2024-07-20 18:03:07.931196] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bb610 (9): Bad file descriptor 00:27:33.378 [2024-07-20 18:03:07.931218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a73e40 (9): Bad file descriptor 00:27:33.378 [2024-07-20 18:03:07.931284] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:33.378 [2024-07-20 18:03:07.931512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.378 [2024-07-20 18:03:07.931541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28eaf50 with addr=10.0.0.2, port=4420 00:27:33.378 [2024-07-20 18:03:07.931557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28eaf50 is same with the state(5) to be set 00:27:33.378 [2024-07-20 18:03:07.931769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.378 [2024-07-20 18:03:07.931804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a73c60 with addr=10.0.0.2, port=4420 00:27:33.378 [2024-07-20 18:03:07.931822] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a73c60 is same with the state(5) to be set 00:27:33.378 [2024-07-20 18:03:07.932038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.378 [2024-07-20 18:03:07.932067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28cca60 with addr=10.0.0.2, port=4420 00:27:33.378 [2024-07-20 18:03:07.932084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28cca60 is same with the state(5) to be set 00:27:33.378 [2024-07-20 18:03:07.932099] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:33.378 [2024-07-20 18:03:07.932111] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:33.378 [2024-07-20 18:03:07.932124] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:33.378 [2024-07-20 18:03:07.932142] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:33.378 [2024-07-20 18:03:07.932156] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:33.378 [2024-07-20 18:03:07.932169] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
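(Aside for readers decoding the connect() failures above: errno 111 on Linux is ECONNREFUSED, i.e. nothing is listening on 10.0.0.2:4420 while the target is being torn down. One way to confirm the mapping on a typical Linux host, assuming the usual kernel header location:)

grep 'ECONNREFUSED' /usr/include/asm-generic/errno.h
# typically prints: #define ECONNREFUSED    111     /* Connection refused */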
00:27:33.378 [2024-07-20 18:03:07.932214] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:33.378 [2024-07-20 18:03:07.932246] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.378 [2024-07-20 18:03:07.932262] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.378 [2024-07-20 18:03:07.932481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.378 [2024-07-20 18:03:07.932507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a6bfd0 with addr=10.0.0.2, port=4420 00:27:33.378 [2024-07-20 18:03:07.932522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a6bfd0 is same with the state(5) to be set 00:27:33.378 [2024-07-20 18:03:07.932541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28eaf50 (9): Bad file descriptor 00:27:33.378 [2024-07-20 18:03:07.932560] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a73c60 (9): Bad file descriptor 00:27:33.378 [2024-07-20 18:03:07.932578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28cca60 (9): Bad file descriptor 00:27:33.378 [2024-07-20 18:03:07.932822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.378 [2024-07-20 18:03:07.932850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28c4700 with addr=10.0.0.2, port=4420 00:27:33.378 [2024-07-20 18:03:07.932867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28c4700 is same with the state(5) to be set 00:27:33.378 [2024-07-20 18:03:07.932885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a6bfd0 (9): Bad file descriptor 00:27:33.378 [2024-07-20 18:03:07.932901] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:33.378 [2024-07-20 18:03:07.932915] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:33.378 [2024-07-20 18:03:07.932928] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:33.378 [2024-07-20 18:03:07.932945] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:33.378 [2024-07-20 18:03:07.932959] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:33.378 [2024-07-20 18:03:07.932971] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:33.378 [2024-07-20 18:03:07.932986] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:33.378 [2024-07-20 18:03:07.933004] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:33.378 [2024-07-20 18:03:07.933018] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:33.378 [2024-07-20 18:03:07.933056] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
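(Aside: the reconnect attempts above target 10.0.0.2:4420 with per-controller NQNs of the form nqn.2016-06.io.spdk:cnodeN. A rough host-side equivalent using nvme-cli, shown for illustration only; the test itself drives the SPDK userspace initiator, not the kernel driver:)

# Hypothetical manual connect to one of the subsystems named in the log:
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode7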
00:27:33.378 [2024-07-20 18:03:07.933073] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.378 [2024-07-20 18:03:07.933095] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.378 [2024-07-20 18:03:07.933110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28c4700 (9): Bad file descriptor 00:27:33.378 [2024-07-20 18:03:07.933126] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:33.378 [2024-07-20 18:03:07.933140] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:33.378 [2024-07-20 18:03:07.933152] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:33.378 [2024-07-20 18:03:07.933194] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.378 [2024-07-20 18:03:07.933212] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:33.378 [2024-07-20 18:03:07.933224] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:33.378 [2024-07-20 18:03:07.933237] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:33.378 [2024-07-20 18:03:07.933272] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.636 18:03:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:33.636 18:03:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:34.569 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1031931 00:27:34.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1031931) - No such process 00:27:34.569 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:34.569 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:34.569 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:34.569 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:34.569 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:34.569 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:34.569 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:34.569 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:34.569 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:34.569 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:34.569 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:34.569 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:34.569 rmmod nvme_tcp 00:27:34.826 rmmod nvme_fabrics 00:27:34.826 rmmod nvme_keyring 
00:27:34.826 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:34.826 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:34.826 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:34.826 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:34.826 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:34.826 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:34.826 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:34.826 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:34.826 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:34.826 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.826 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:34.826 18:03:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.726 18:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:36.726 00:27:36.726 real 0m7.572s 00:27:36.726 user 0m18.548s 00:27:36.726 sys 0m1.527s 00:27:36.726 18:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:36.726 18:03:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:36.726 ************************************ 00:27:36.726 END TEST nvmf_shutdown_tc3 00:27:36.726 ************************************ 00:27:36.726 18:03:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:36.726 00:27:36.726 real 0m27.545s 00:27:36.726 user 1m17.003s 00:27:36.726 sys 0m6.535s 00:27:36.726 18:03:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:36.726 18:03:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:36.726 ************************************ 00:27:36.726 END TEST nvmf_shutdown 00:27:36.726 ************************************ 00:27:36.726 18:03:11 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:36.726 18:03:11 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:36.726 18:03:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:36.726 18:03:11 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:36.726 18:03:11 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:36.726 18:03:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:36.726 18:03:11 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:36.726 18:03:11 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:36.726 18:03:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:36.726 18:03:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:36.726 18:03:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:36.985 ************************************ 00:27:36.985 START TEST nvmf_multicontroller 00:27:36.985 
************************************ 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:36.985 * Looking for test storage... 00:27:36.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:36.985 18:03:11 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:36.985 18:03:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.884 18:03:13 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:38.884 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:38.884 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:38.884 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:38.884 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.884 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.884 18:03:13 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:38.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:27:38.885 00:27:38.885 --- 10.0.0.2 ping statistics --- 00:27:38.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.885 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:38.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:27:38.885 00:27:38.885 --- 10.0.0.1 ping statistics --- 00:27:38.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.885 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1034957 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1034957 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1034957 ']' 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:38.885 18:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:38.885 [2024-07-20 18:03:13.669919] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:38.885 [2024-07-20 18:03:13.670006] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.142 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.142 [2024-07-20 18:03:13.735428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:39.142 [2024-07-20 18:03:13.820344] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:39.142 [2024-07-20 18:03:13.820412] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:39.142 [2024-07-20 18:03:13.820441] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:39.142 [2024-07-20 18:03:13.820453] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:39.142 [2024-07-20 18:03:13.820462] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:39.142 [2024-07-20 18:03:13.820556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:39.142 [2024-07-20 18:03:13.820582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:39.142 [2024-07-20 18:03:13.820584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.142 18:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:39.142 18:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:27:39.142 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:39.142 18:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:39.142 18:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.400 18:03:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.400 18:03:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:39.400 18:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.400 18:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.400 [2024-07-20 18:03:13.964810] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.400 18:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.400 18:03:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:39.400 18:03:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.400 18:03:13 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.400 Malloc0 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.400 [2024-07-20 18:03:14.029334] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.400 [2024-07-20 18:03:14.037222] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.400 Malloc1 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.400 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
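The RPCs traced here build the target side of the test: malloc bdevs exposed through nqn.2016-06.io.spdk:cnode1 and nqn.2016-06.io.spdk:cnode2, with TCP listeners on 10.0.0.2 ports 4420 and 4421. Outside the harness, roughly the same configuration can be driven with scripts/rpc.py against the target's /var/tmp/spdk.sock (a sketch for cnode1 only, reusing the exact arguments shown in the trace; rpc_cmd in the harness issues the same RPCs):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421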
00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1035051 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1035051 /var/tmp/bdevperf.sock 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1035051 ']' 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:39.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
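bdevperf is started with -z on its own RPC socket (/var/tmp/bdevperf.sock), so it sits idle until it is configured and started over JSON-RPC. Before running any I/O the test checks the negative paths: re-attaching under the existing name NVMe0 with a different hostnqn, a different subnqn, '-x disable' or '-x failover' is rejected with JSON-RPC error -114, as the request/response dumps below show. The happy path it then drives is roughly the following (same socket, addresses and NQNs as the trace; sketch only):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    # first controller/path to cnode1, pinned to host address 10.0.0.2 and host svcid 60000
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # second controller against the 4421 listener of the same subsystem
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # both controllers must be visible before I/O starts (the test expects a count of 2)
    $RPC -s $SOCK bdev_nvme_get_controllers | grep -c NVMe
    # kick off the 128-deep 4096-byte write workload defined on the bdevperf command line
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

In the trace the script also attaches NVMe0 to the 4421 listener and detaches it again (bdev_nvme_detach_controller) before NVMe1 is added.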
00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:39.401 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.659 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:39.659 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:27:39.659 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:39.659 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.659 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.916 NVMe0n1 00:27:39.916 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.916 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:39.916 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:39.916 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.916 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.916 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.916 1 00:27:39.916 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:39.916 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:39.916 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:39.916 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:39.916 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.916 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:39.916 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.916 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:39.916 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.916 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.916 request: 00:27:39.916 { 00:27:39.916 "name": "NVMe0", 00:27:39.916 "trtype": "tcp", 00:27:39.916 "traddr": "10.0.0.2", 00:27:39.916 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:39.916 "hostaddr": "10.0.0.2", 00:27:39.916 "hostsvcid": "60000", 00:27:39.916 "adrfam": "ipv4", 00:27:39.916 "trsvcid": "4420", 00:27:39.917 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:39.917 "method": 
"bdev_nvme_attach_controller", 00:27:39.917 "req_id": 1 00:27:39.917 } 00:27:39.917 Got JSON-RPC error response 00:27:39.917 response: 00:27:39.917 { 00:27:39.917 "code": -114, 00:27:39.917 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:39.917 } 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.917 request: 00:27:39.917 { 00:27:39.917 "name": "NVMe0", 00:27:39.917 "trtype": "tcp", 00:27:39.917 "traddr": "10.0.0.2", 00:27:39.917 "hostaddr": "10.0.0.2", 00:27:39.917 "hostsvcid": "60000", 00:27:39.917 "adrfam": "ipv4", 00:27:39.917 "trsvcid": "4420", 00:27:39.917 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:39.917 "method": "bdev_nvme_attach_controller", 00:27:39.917 "req_id": 1 00:27:39.917 } 00:27:39.917 Got JSON-RPC error response 00:27:39.917 response: 00:27:39.917 { 00:27:39.917 "code": -114, 00:27:39.917 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:39.917 } 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.917 request: 00:27:39.917 { 00:27:39.917 "name": "NVMe0", 00:27:39.917 "trtype": "tcp", 00:27:39.917 "traddr": "10.0.0.2", 00:27:39.917 "hostaddr": "10.0.0.2", 00:27:39.917 "hostsvcid": "60000", 00:27:39.917 "adrfam": "ipv4", 00:27:39.917 "trsvcid": "4420", 00:27:39.917 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:39.917 "multipath": "disable", 00:27:39.917 "method": "bdev_nvme_attach_controller", 00:27:39.917 "req_id": 1 00:27:39.917 } 00:27:39.917 Got JSON-RPC error response 00:27:39.917 response: 00:27:39.917 { 00:27:39.917 "code": -114, 00:27:39.917 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:39.917 } 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.917 request: 00:27:39.917 { 00:27:39.917 "name": "NVMe0", 00:27:39.917 "trtype": "tcp", 00:27:39.917 "traddr": "10.0.0.2", 00:27:39.917 "hostaddr": "10.0.0.2", 00:27:39.917 "hostsvcid": "60000", 00:27:39.917 "adrfam": "ipv4", 00:27:39.917 "trsvcid": "4420", 00:27:39.917 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:39.917 "multipath": "failover", 00:27:39.917 "method": "bdev_nvme_attach_controller", 00:27:39.917 "req_id": 1 00:27:39.917 } 00:27:39.917 Got JSON-RPC error response 00:27:39.917 response: 00:27:39.917 { 00:27:39.917 "code": -114, 00:27:39.917 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:39.917 } 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.917 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.917 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.174 00:27:40.175 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.175 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:40.175 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.175 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:40.175 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.175 18:03:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.175 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:40.175 18:03:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:41.547 0 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1035051 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1035051 ']' 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1035051 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1035051 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1035051' 00:27:41.547 killing process with pid 1035051 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1035051 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1035051 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:41.547 18:03:16 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:27:41.547 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:41.547 [2024-07-20 18:03:14.143351] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:41.547 [2024-07-20 18:03:14.143441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035051 ] 00:27:41.547 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.547 [2024-07-20 18:03:14.205177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.547 [2024-07-20 18:03:14.293293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.547 [2024-07-20 18:03:14.858428] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 18b05a17-4808-427d-8b69-c62df1bca305 already exists 00:27:41.547 [2024-07-20 18:03:14.858471] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:18b05a17-4808-427d-8b69-c62df1bca305 alias for bdev NVMe1n1 00:27:41.547 [2024-07-20 18:03:14.858489] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:41.547 Running I/O for 1 seconds... 
00:27:41.547 00:27:41.547 Latency(us) 00:27:41.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.547 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:41.547 NVMe0n1 : 1.01 18285.80 71.43 0.00 0.00 6980.97 4587.52 21942.42 00:27:41.547 =================================================================================================================== 00:27:41.547 Total : 18285.80 71.43 0.00 0.00 6980.97 4587.52 21942.42 00:27:41.547 Received shutdown signal, test time was about 1.000000 seconds 00:27:41.547 00:27:41.547 Latency(us) 00:27:41.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.547 =================================================================================================================== 00:27:41.547 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:41.547 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:41.547 18:03:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:27:41.548 18:03:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:41.548 18:03:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:41.548 rmmod nvme_tcp 00:27:41.548 rmmod nvme_fabrics 00:27:41.548 rmmod nvme_keyring 00:27:41.548 18:03:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:41.548 18:03:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:27:41.548 18:03:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:27:41.548 18:03:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1034957 ']' 00:27:41.548 18:03:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1034957 00:27:41.548 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1034957 ']' 00:27:41.548 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1034957 00:27:41.548 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:27:41.548 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:41.548 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1034957 00:27:41.806 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:41.806 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:41.806 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1034957' 00:27:41.806 killing process with pid 1034957 00:27:41.806 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1034957 00:27:41.806 18:03:16 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1034957 00:27:42.063 18:03:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:42.063 18:03:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:42.063 18:03:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:42.063 18:03:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:42.063 18:03:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:42.063 18:03:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.063 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:42.063 18:03:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.961 18:03:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:43.961 00:27:43.961 real 0m7.157s 00:27:43.961 user 0m11.168s 00:27:43.961 sys 0m2.195s 00:27:43.961 18:03:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:43.961 18:03:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:43.961 ************************************ 00:27:43.961 END TEST nvmf_multicontroller 00:27:43.961 ************************************ 00:27:43.961 18:03:18 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:43.961 18:03:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:43.961 18:03:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:43.961 18:03:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:43.961 ************************************ 00:27:43.961 START TEST nvmf_aer 00:27:43.961 ************************************ 00:27:43.961 18:03:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:44.219 * Looking for test storage... 
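The suite then moves on to the AER host test, launched the same way as the multicontroller test above: run_test invokes the self-contained script with the tcp transport selected. On a box with the same setup it can be started directly (path and argument exactly as in the trace; sketch only):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp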
00:27:44.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.219 18:03:18 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:27:44.220 18:03:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:46.115 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:46.116 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:27:46.116 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:46.116 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:46.116 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.116 
18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:46.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:46.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:27:46.116 00:27:46.116 --- 10.0.0.2 ping statistics --- 00:27:46.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.116 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:46.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:46.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:27:46.116 00:27:46.116 --- 10.0.0.1 ping statistics --- 00:27:46.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.116 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1037186 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1037186 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 1037186 ']' 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:46.116 18:03:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:46.116 [2024-07-20 18:03:20.904199] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:46.116 [2024-07-20 18:03:20.904274] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.373 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.373 [2024-07-20 18:03:20.970485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:46.374 [2024-07-20 18:03:21.058911] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.374 [2024-07-20 18:03:21.058971] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:46.374 [2024-07-20 18:03:21.059004] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.374 [2024-07-20 18:03:21.059016] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.374 [2024-07-20 18:03:21.059026] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:46.374 [2024-07-20 18:03:21.059077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.374 [2024-07-20 18:03:21.059139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:46.374 [2024-07-20 18:03:21.059169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:46.374 [2024-07-20 18:03:21.059170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.630 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:46.630 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:27:46.630 18:03:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:46.630 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:46.630 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:46.630 18:03:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.630 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:46.630 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.630 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:46.630 [2024-07-20 18:03:21.214598] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.630 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.630 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:46.630 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.630 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:46.630 Malloc0 00:27:46.630 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:46.631 [2024-07-20 18:03:21.268051] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:46.631 [ 00:27:46.631 { 00:27:46.631 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:46.631 "subtype": "Discovery", 00:27:46.631 "listen_addresses": [], 00:27:46.631 "allow_any_host": true, 00:27:46.631 "hosts": [] 00:27:46.631 }, 00:27:46.631 { 00:27:46.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.631 "subtype": "NVMe", 00:27:46.631 "listen_addresses": [ 00:27:46.631 { 00:27:46.631 "trtype": "TCP", 00:27:46.631 "adrfam": "IPv4", 00:27:46.631 "traddr": "10.0.0.2", 00:27:46.631 "trsvcid": "4420" 00:27:46.631 } 00:27:46.631 ], 00:27:46.631 "allow_any_host": true, 00:27:46.631 "hosts": [], 00:27:46.631 "serial_number": "SPDK00000000000001", 00:27:46.631 "model_number": "SPDK bdev Controller", 00:27:46.631 "max_namespaces": 2, 00:27:46.631 "min_cntlid": 1, 00:27:46.631 "max_cntlid": 65519, 00:27:46.631 "namespaces": [ 00:27:46.631 { 00:27:46.631 "nsid": 1, 00:27:46.631 "bdev_name": "Malloc0", 00:27:46.631 "name": "Malloc0", 00:27:46.631 "nguid": "F481009663A3440387C266D717579FD4", 00:27:46.631 "uuid": "f4810096-63a3-4403-87c2-66d717579fd4" 00:27:46.631 } 00:27:46.631 ] 00:27:46.631 } 00:27:46.631 ] 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1037334 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:27:46.631 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:27:46.631 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:46.888 Malloc1 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.888 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:46.888 Asynchronous Event Request test 00:27:46.888 Attaching to 10.0.0.2 00:27:46.888 Attached to 10.0.0.2 00:27:46.888 Registering asynchronous event callbacks... 00:27:46.888 Starting namespace attribute notice tests for all controllers... 00:27:46.888 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:46.888 aer_cb - Changed Namespace 00:27:46.888 Cleaning up... 
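The AER exercise above is driven entirely over RPC against the namespaced nvmf_tgt: aer.sh creates the TCP transport, a Malloc0-backed subsystem capped at two namespaces, and a listener on 10.0.0.2:4420, then launches test/nvme/aer/aer with -n 2 and a touch file; adding Malloc1 as namespace 2 while the tool is attached is what produces the "Changed Namespace" notice logged by aer_cb. A condensed sketch of that RPC sequence, reusing the names and arguments traced in this run (the harness issues these through its rpc_cmd wrapper rather than calling scripts/rpc.py directly):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 --name Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # aer is started with -n 2 -t /tmp/aer_touch_file; the script polls for the
    # touch file, then adds a second namespace, which triggers the AEN seen above
    rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2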
00:27:46.888 [ 00:27:46.888 { 00:27:46.888 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:46.888 "subtype": "Discovery", 00:27:46.888 "listen_addresses": [], 00:27:46.888 "allow_any_host": true, 00:27:46.888 "hosts": [] 00:27:46.888 }, 00:27:46.888 { 00:27:46.888 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.888 "subtype": "NVMe", 00:27:46.888 "listen_addresses": [ 00:27:46.888 { 00:27:46.888 "trtype": "TCP", 00:27:46.888 "adrfam": "IPv4", 00:27:46.888 "traddr": "10.0.0.2", 00:27:46.888 "trsvcid": "4420" 00:27:46.888 } 00:27:46.888 ], 00:27:46.888 "allow_any_host": true, 00:27:46.888 "hosts": [], 00:27:46.888 "serial_number": "SPDK00000000000001", 00:27:46.888 "model_number": "SPDK bdev Controller", 00:27:46.888 "max_namespaces": 2, 00:27:46.888 "min_cntlid": 1, 00:27:46.889 "max_cntlid": 65519, 00:27:46.889 "namespaces": [ 00:27:46.889 { 00:27:46.889 "nsid": 1, 00:27:46.889 "bdev_name": "Malloc0", 00:27:46.889 "name": "Malloc0", 00:27:46.889 "nguid": "F481009663A3440387C266D717579FD4", 00:27:46.889 "uuid": "f4810096-63a3-4403-87c2-66d717579fd4" 00:27:46.889 }, 00:27:46.889 { 00:27:46.889 "nsid": 2, 00:27:46.889 "bdev_name": "Malloc1", 00:27:46.889 "name": "Malloc1", 00:27:46.889 "nguid": "44997612B6684544B897B236F0C221C6", 00:27:46.889 "uuid": "44997612-b668-4544-b897-b236f0c221c6" 00:27:46.889 } 00:27:46.889 ] 00:27:46.889 } 00:27:46.889 ] 00:27:46.889 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.889 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1037334 00:27:46.889 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:46.889 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.889 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.146 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.146 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:47.146 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.146 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.146 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.146 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:47.146 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:47.147 rmmod nvme_tcp 00:27:47.147 rmmod nvme_fabrics 00:27:47.147 rmmod nvme_keyring 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1037186 ']' 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1037186 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 1037186 ']' 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 1037186 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1037186 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1037186' 00:27:47.147 killing process with pid 1037186 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 1037186 00:27:47.147 18:03:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 1037186 00:27:47.405 18:03:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:47.405 18:03:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:47.405 18:03:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:47.405 18:03:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:47.405 18:03:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:47.405 18:03:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.405 18:03:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:47.405 18:03:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.303 18:03:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:49.303 00:27:49.303 real 0m5.314s 00:27:49.303 user 0m4.399s 00:27:49.303 sys 0m1.816s 00:27:49.303 18:03:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:49.303 18:03:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:49.303 ************************************ 00:27:49.303 END TEST nvmf_aer 00:27:49.303 ************************************ 00:27:49.303 18:03:24 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:49.303 18:03:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:49.303 18:03:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:49.303 18:03:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:49.561 ************************************ 00:27:49.561 START TEST nvmf_async_init 00:27:49.561 ************************************ 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:49.561 * Looking for test storage... 
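The nvmf_async_init test that starts here repeats the same fixture but pins nvmf_tgt to a single core (-m 0x1). It creates a 1024-block, 512-byte null bdev, exposes it as nqn.2016-06.io.spdk:cnode0 with a fixed NGUID (a uuidgen value with the dashes stripped), attaches it from the host side as nvme0 via bdev_nvme_attach_controller, and then resets the controller and re-checks the resulting bdev. A condensed sketch of that flow, with every name and argument taken from the trace below (nothing new introduced):

    rpc_cmd nvmf_create_transport -t tcp -o
    rpc_cmd bdev_null_create null0 1024 512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ff6107d2a5944e5a8c698b4fd58e5285
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    rpc_cmd bdev_get_bdevs -b nvme0n1           # nvme0n1 reports the expected uuid
    rpc_cmd bdev_nvme_reset_controller nvme0    # reconnects; bdev_get_bdevs is checked again afterwards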
00:27:49.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ff6107d2a5944e5a8c698b4fd58e5285 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:49.561 18:03:24 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:49.561 18:03:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:51.510 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:51.510 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.510 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:51.511 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:51.511 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:51.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:27:51.511 00:27:51.511 --- 10.0.0.2 ping statistics --- 00:27:51.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.511 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:51.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:27:51.511 00:27:51.511 --- 10.0.0.1 ping statistics --- 00:27:51.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.511 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1039267 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1039267 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 1039267 ']' 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:51.511 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:51.769 [2024-07-20 18:03:26.333543] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:27:51.769 [2024-07-20 18:03:26.333610] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.769 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.769 [2024-07-20 18:03:26.401587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.769 [2024-07-20 18:03:26.490957] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.769 [2024-07-20 18:03:26.491021] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.769 [2024-07-20 18:03:26.491038] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.769 [2024-07-20 18:03:26.491051] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.769 [2024-07-20 18:03:26.491063] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:51.769 [2024-07-20 18:03:26.491095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.027 [2024-07-20 18:03:26.630561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.027 null0 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ff6107d2a5944e5a8c698b4fd58e5285 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.027 [2024-07-20 18:03:26.670855] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.027 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.283 nvme0n1 00:27:52.284 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.284 18:03:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:52.284 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.284 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.284 [ 00:27:52.284 { 00:27:52.284 "name": "nvme0n1", 00:27:52.284 "aliases": [ 00:27:52.284 "ff6107d2-a594-4e5a-8c69-8b4fd58e5285" 00:27:52.284 ], 00:27:52.284 "product_name": "NVMe disk", 00:27:52.284 "block_size": 512, 00:27:52.284 "num_blocks": 2097152, 00:27:52.284 "uuid": "ff6107d2-a594-4e5a-8c69-8b4fd58e5285", 00:27:52.284 "assigned_rate_limits": { 00:27:52.284 "rw_ios_per_sec": 0, 00:27:52.284 "rw_mbytes_per_sec": 0, 00:27:52.284 "r_mbytes_per_sec": 0, 00:27:52.284 "w_mbytes_per_sec": 0 00:27:52.284 }, 00:27:52.284 "claimed": false, 00:27:52.284 "zoned": false, 00:27:52.284 "supported_io_types": { 00:27:52.284 "read": true, 00:27:52.284 "write": true, 00:27:52.284 "unmap": false, 00:27:52.284 "write_zeroes": true, 00:27:52.284 "flush": true, 00:27:52.284 "reset": true, 00:27:52.284 "compare": true, 00:27:52.284 "compare_and_write": true, 00:27:52.284 "abort": true, 00:27:52.284 "nvme_admin": true, 00:27:52.284 "nvme_io": true 00:27:52.284 }, 00:27:52.284 "memory_domains": [ 00:27:52.284 { 00:27:52.284 "dma_device_id": "system", 00:27:52.284 "dma_device_type": 1 00:27:52.284 } 00:27:52.284 ], 00:27:52.284 "driver_specific": { 00:27:52.284 "nvme": [ 00:27:52.284 { 00:27:52.284 "trid": { 00:27:52.284 "trtype": "TCP", 00:27:52.284 "adrfam": "IPv4", 00:27:52.284 "traddr": "10.0.0.2", 00:27:52.284 "trsvcid": "4420", 00:27:52.284 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:52.284 }, 00:27:52.284 "ctrlr_data": { 00:27:52.284 "cntlid": 1, 00:27:52.284 "vendor_id": "0x8086", 00:27:52.284 "model_number": "SPDK bdev Controller", 00:27:52.284 "serial_number": "00000000000000000000", 00:27:52.284 "firmware_revision": 
"24.05.1", 00:27:52.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:52.284 "oacs": { 00:27:52.284 "security": 0, 00:27:52.284 "format": 0, 00:27:52.284 "firmware": 0, 00:27:52.284 "ns_manage": 0 00:27:52.284 }, 00:27:52.284 "multi_ctrlr": true, 00:27:52.284 "ana_reporting": false 00:27:52.284 }, 00:27:52.284 "vs": { 00:27:52.284 "nvme_version": "1.3" 00:27:52.284 }, 00:27:52.284 "ns_data": { 00:27:52.284 "id": 1, 00:27:52.284 "can_share": true 00:27:52.284 } 00:27:52.284 } 00:27:52.284 ], 00:27:52.284 "mp_policy": "active_passive" 00:27:52.284 } 00:27:52.284 } 00:27:52.284 ] 00:27:52.284 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.284 18:03:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:52.284 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.284 18:03:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.284 [2024-07-20 18:03:26.923361] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:52.284 [2024-07-20 18:03:26.923451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245db90 (9): Bad file descriptor 00:27:52.284 [2024-07-20 18:03:27.065941] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:52.284 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.284 18:03:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:52.284 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.284 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.284 [ 00:27:52.284 { 00:27:52.284 "name": "nvme0n1", 00:27:52.284 "aliases": [ 00:27:52.284 "ff6107d2-a594-4e5a-8c69-8b4fd58e5285" 00:27:52.284 ], 00:27:52.284 "product_name": "NVMe disk", 00:27:52.284 "block_size": 512, 00:27:52.284 "num_blocks": 2097152, 00:27:52.284 "uuid": "ff6107d2-a594-4e5a-8c69-8b4fd58e5285", 00:27:52.284 "assigned_rate_limits": { 00:27:52.284 "rw_ios_per_sec": 0, 00:27:52.284 "rw_mbytes_per_sec": 0, 00:27:52.284 "r_mbytes_per_sec": 0, 00:27:52.284 "w_mbytes_per_sec": 0 00:27:52.284 }, 00:27:52.284 "claimed": false, 00:27:52.284 "zoned": false, 00:27:52.284 "supported_io_types": { 00:27:52.284 "read": true, 00:27:52.284 "write": true, 00:27:52.284 "unmap": false, 00:27:52.284 "write_zeroes": true, 00:27:52.284 "flush": true, 00:27:52.284 "reset": true, 00:27:52.284 "compare": true, 00:27:52.284 "compare_and_write": true, 00:27:52.284 "abort": true, 00:27:52.284 "nvme_admin": true, 00:27:52.284 "nvme_io": true 00:27:52.284 }, 00:27:52.284 "memory_domains": [ 00:27:52.284 { 00:27:52.284 "dma_device_id": "system", 00:27:52.284 "dma_device_type": 1 00:27:52.284 } 00:27:52.284 ], 00:27:52.284 "driver_specific": { 00:27:52.284 "nvme": [ 00:27:52.284 { 00:27:52.284 "trid": { 00:27:52.284 "trtype": "TCP", 00:27:52.284 "adrfam": "IPv4", 00:27:52.541 "traddr": "10.0.0.2", 00:27:52.541 "trsvcid": "4420", 00:27:52.541 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:52.541 }, 00:27:52.541 "ctrlr_data": { 00:27:52.541 "cntlid": 2, 00:27:52.541 "vendor_id": "0x8086", 00:27:52.541 "model_number": "SPDK bdev Controller", 00:27:52.541 "serial_number": "00000000000000000000", 00:27:52.541 "firmware_revision": "24.05.1", 00:27:52.541 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:52.541 
"oacs": { 00:27:52.541 "security": 0, 00:27:52.541 "format": 0, 00:27:52.541 "firmware": 0, 00:27:52.541 "ns_manage": 0 00:27:52.541 }, 00:27:52.541 "multi_ctrlr": true, 00:27:52.541 "ana_reporting": false 00:27:52.541 }, 00:27:52.541 "vs": { 00:27:52.541 "nvme_version": "1.3" 00:27:52.541 }, 00:27:52.541 "ns_data": { 00:27:52.541 "id": 1, 00:27:52.541 "can_share": true 00:27:52.541 } 00:27:52.541 } 00:27:52.541 ], 00:27:52.541 "mp_policy": "active_passive" 00:27:52.541 } 00:27:52.541 } 00:27:52.541 ] 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.trQwVjFSdi 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.trQwVjFSdi 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.541 [2024-07-20 18:03:27.116033] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:52.541 [2024-07-20 18:03:27.116170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.trQwVjFSdi 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.541 [2024-07-20 18:03:27.124052] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.trQwVjFSdi 00:27:52.541 18:03:27 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.541 [2024-07-20 18:03:27.132065] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:52.541 [2024-07-20 18:03:27.132128] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:52.541 nvme0n1 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.541 [ 00:27:52.541 { 00:27:52.541 "name": "nvme0n1", 00:27:52.541 "aliases": [ 00:27:52.541 "ff6107d2-a594-4e5a-8c69-8b4fd58e5285" 00:27:52.541 ], 00:27:52.541 "product_name": "NVMe disk", 00:27:52.541 "block_size": 512, 00:27:52.541 "num_blocks": 2097152, 00:27:52.541 "uuid": "ff6107d2-a594-4e5a-8c69-8b4fd58e5285", 00:27:52.541 "assigned_rate_limits": { 00:27:52.541 "rw_ios_per_sec": 0, 00:27:52.541 "rw_mbytes_per_sec": 0, 00:27:52.541 "r_mbytes_per_sec": 0, 00:27:52.541 "w_mbytes_per_sec": 0 00:27:52.541 }, 00:27:52.541 "claimed": false, 00:27:52.541 "zoned": false, 00:27:52.541 "supported_io_types": { 00:27:52.541 "read": true, 00:27:52.541 "write": true, 00:27:52.541 "unmap": false, 00:27:52.541 "write_zeroes": true, 00:27:52.541 "flush": true, 00:27:52.541 "reset": true, 00:27:52.541 "compare": true, 00:27:52.541 "compare_and_write": true, 00:27:52.541 "abort": true, 00:27:52.541 "nvme_admin": true, 00:27:52.541 "nvme_io": true 00:27:52.541 }, 00:27:52.541 "memory_domains": [ 00:27:52.541 { 00:27:52.541 "dma_device_id": "system", 00:27:52.541 "dma_device_type": 1 00:27:52.541 } 00:27:52.541 ], 00:27:52.541 "driver_specific": { 00:27:52.541 "nvme": [ 00:27:52.541 { 00:27:52.541 "trid": { 00:27:52.541 "trtype": "TCP", 00:27:52.541 "adrfam": "IPv4", 00:27:52.541 "traddr": "10.0.0.2", 00:27:52.541 "trsvcid": "4421", 00:27:52.541 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:52.541 }, 00:27:52.541 "ctrlr_data": { 00:27:52.541 "cntlid": 3, 00:27:52.541 "vendor_id": "0x8086", 00:27:52.541 "model_number": "SPDK bdev Controller", 00:27:52.541 "serial_number": "00000000000000000000", 00:27:52.541 "firmware_revision": "24.05.1", 00:27:52.541 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:52.541 "oacs": { 00:27:52.541 "security": 0, 00:27:52.541 "format": 0, 00:27:52.541 "firmware": 0, 00:27:52.541 "ns_manage": 0 00:27:52.541 }, 00:27:52.541 "multi_ctrlr": true, 00:27:52.541 "ana_reporting": false 00:27:52.541 }, 00:27:52.541 "vs": { 00:27:52.541 "nvme_version": "1.3" 00:27:52.541 }, 00:27:52.541 "ns_data": { 00:27:52.541 "id": 1, 00:27:52.541 "can_share": true 00:27:52.541 } 00:27:52.541 } 00:27:52.541 ], 00:27:52.541 "mp_policy": "active_passive" 00:27:52.541 } 00:27:52.541 } 00:27:52.541 ] 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- 
# set +x 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.trQwVjFSdi 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:52.541 rmmod nvme_tcp 00:27:52.541 rmmod nvme_fabrics 00:27:52.541 rmmod nvme_keyring 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1039267 ']' 00:27:52.541 18:03:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1039267 00:27:52.542 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 1039267 ']' 00:27:52.542 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 1039267 00:27:52.542 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:27:52.542 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:52.542 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1039267 00:27:52.542 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:52.542 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:52.542 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1039267' 00:27:52.542 killing process with pid 1039267 00:27:52.542 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 1039267 00:27:52.542 [2024-07-20 18:03:27.293769] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:52.542 [2024-07-20 18:03:27.293816] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:52.542 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 1039267 00:27:52.799 18:03:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:52.799 18:03:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:52.799 18:03:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:52.799 18:03:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:52.799 18:03:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:52.799 18:03:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.799 
18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:52.799 18:03:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.323 18:03:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:55.323 00:27:55.323 real 0m5.436s 00:27:55.323 user 0m2.011s 00:27:55.323 sys 0m1.838s 00:27:55.323 18:03:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:55.323 18:03:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:55.323 ************************************ 00:27:55.323 END TEST nvmf_async_init 00:27:55.323 ************************************ 00:27:55.323 18:03:29 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:55.323 18:03:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:55.323 18:03:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:55.323 18:03:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:55.323 ************************************ 00:27:55.323 START TEST dma 00:27:55.323 ************************************ 00:27:55.323 18:03:29 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:55.323 * Looking for test storage... 00:27:55.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:55.323 18:03:29 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.323 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:27:55.323 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.323 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.323 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.323 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.323 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.323 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.323 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.323 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.323 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.323 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.323 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:55.323 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:55.323 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.323 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.323 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.323 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.324 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.324 18:03:29 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.324 18:03:29 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.324 18:03:29 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.324 18:03:29 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.324 18:03:29 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.324 18:03:29 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.324 18:03:29 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:27:55.324 18:03:29 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.324 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:27:55.324 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:55.324 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:55.324 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.324 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.324 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.324 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:55.324 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:55.324 18:03:29 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:55.324 18:03:29 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:55.324 18:03:29 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:27:55.324 00:27:55.324 real 0m0.061s 00:27:55.324 user 0m0.029s 00:27:55.324 sys 0m0.037s 00:27:55.324 
18:03:29 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:55.324 18:03:29 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:27:55.324 ************************************ 00:27:55.324 END TEST dma 00:27:55.324 ************************************ 00:27:55.324 18:03:29 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:55.324 18:03:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:55.324 18:03:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:55.324 18:03:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:55.324 ************************************ 00:27:55.324 START TEST nvmf_identify 00:27:55.324 ************************************ 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:55.324 * Looking for test storage... 00:27:55.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.324 18:03:29 
nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 
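Before nvmftestinit runs, identify.sh has already sourced test/nvmf/common.sh, which pins the listener ports (4420/4421/4422), generates a host NQN with nvme gen-hostnqn, and records the matching host ID, as the variable dump above shows. A minimal bash sketch of that host-identity step, assuming the ID is simply the UUID portion of the generated NQN (the derivation rule is inferred from the values in the log, not taken from the script itself):

# Hedged sketch: reproduce the host identity seen in the log above.
# Assumption: NVME_HOSTID is the text after the last ':' of the generated NQN.
NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}           # e.g. <uuid>
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
printf 'host identity: %s\n' "${NVME_HOST[@]}"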
00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:55.324 18:03:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.222 18:03:31 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:57.222 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:57.222 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:57.222 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:57.222 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:57.223 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:57.223 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:57.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:27:57.223 00:27:57.223 --- 10.0.0.2 ping statistics --- 00:27:57.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.223 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:57.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:57.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:27:57.223 00:27:57.223 --- 10.0.0.1 ping statistics --- 00:27:57.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.223 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1041389 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1041389 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 1041389 ']' 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:57.223 18:03:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:57.223 [2024-07-20 18:03:31.959954] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
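The interface plumbing logged just above (nvmf/common.sh steps 229-268) reduces to moving one port of the e810 pair into a private network namespace and addressing both ends, so the initiator side (cvl_0_1, 10.0.0.1) can reach the target side (cvl_0_0, 10.0.0.2) on TCP port 4420, which the two pings confirm. Condensed from the commands in the log (interface names are the ones this host reported; root privileges assumed):

# Condensed from the nvmf_tcp_init steps executed above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator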
00:27:57.223 [2024-07-20 18:03:31.960031] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.223 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.482 [2024-07-20 18:03:32.031254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:57.482 [2024-07-20 18:03:32.126015] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:57.482 [2024-07-20 18:03:32.126076] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:57.482 [2024-07-20 18:03:32.126094] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:57.482 [2024-07-20 18:03:32.126107] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:57.482 [2024-07-20 18:03:32.126119] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:57.482 [2024-07-20 18:03:32.126202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.482 [2024-07-20 18:03:32.126256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:57.482 [2024-07-20 18:03:32.126303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:57.482 [2024-07-20 18:03:32.126307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.482 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:57.482 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:27:57.482 18:03:32 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:57.482 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.482 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:57.482 [2024-07-20 18:03:32.246291] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.482 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.482 18:03:32 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:57.482 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:57.482 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:57.482 18:03:32 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:57.482 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.482 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:57.741 Malloc0 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:57.741 [2024-07-20 18:03:32.317645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:57.741 [ 00:27:57.741 { 00:27:57.741 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:57.741 "subtype": "Discovery", 00:27:57.741 "listen_addresses": [ 00:27:57.741 { 00:27:57.741 "trtype": "TCP", 00:27:57.741 "adrfam": "IPv4", 00:27:57.741 "traddr": "10.0.0.2", 00:27:57.741 "trsvcid": "4420" 00:27:57.741 } 00:27:57.741 ], 00:27:57.741 "allow_any_host": true, 00:27:57.741 "hosts": [] 00:27:57.741 }, 00:27:57.741 { 00:27:57.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:57.741 "subtype": "NVMe", 00:27:57.741 "listen_addresses": [ 00:27:57.741 { 00:27:57.741 "trtype": "TCP", 00:27:57.741 "adrfam": "IPv4", 00:27:57.741 "traddr": "10.0.0.2", 00:27:57.741 "trsvcid": "4420" 00:27:57.741 } 00:27:57.741 ], 00:27:57.741 "allow_any_host": true, 00:27:57.741 "hosts": [], 00:27:57.741 "serial_number": "SPDK00000000000001", 00:27:57.741 "model_number": "SPDK bdev Controller", 00:27:57.741 "max_namespaces": 32, 00:27:57.741 "min_cntlid": 1, 00:27:57.741 "max_cntlid": 65519, 00:27:57.741 "namespaces": [ 00:27:57.741 { 00:27:57.741 "nsid": 1, 00:27:57.741 "bdev_name": "Malloc0", 00:27:57.741 "name": "Malloc0", 00:27:57.741 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:57.741 "eui64": "ABCDEF0123456789", 00:27:57.741 "uuid": "50c2ce68-91f2-4bde-87ca-0720e646b4d1" 00:27:57.741 } 00:27:57.741 ] 00:27:57.741 } 00:27:57.741 ] 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.741 18:03:32 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:57.741 [2024-07-20 18:03:32.356514] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
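The discovery and cnode1 entries printed above come from the handful of RPCs issued immediately beforehand, after which spdk_nvme_identify is pointed at the discovery subsystem with a transport ID string. Reduced to its essentials below; rpc.py is assumed to stand in for the harness's rpc_cmd wrapper and to reach the running nvmf_tgt on its default RPC socket, with paths shortened relative to the SPDK checkout:

# Target-side configuration, as issued through rpc_cmd in the log (sketch).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Host-side identify against the discovery subsystem, as invoked above.
build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all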
00:27:57.741 [2024-07-20 18:03:32.356557] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041454 ] 00:27:57.741 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.741 [2024-07-20 18:03:32.390143] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:57.741 [2024-07-20 18:03:32.390211] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:57.741 [2024-07-20 18:03:32.390220] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:57.741 [2024-07-20 18:03:32.390234] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:57.741 [2024-07-20 18:03:32.390246] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:57.742 [2024-07-20 18:03:32.393859] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:57.742 [2024-07-20 18:03:32.393935] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22a0120 0 00:27:57.742 [2024-07-20 18:03:32.401821] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:57.742 [2024-07-20 18:03:32.401843] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:57.742 [2024-07-20 18:03:32.401852] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:57.742 [2024-07-20 18:03:32.401858] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:57.742 [2024-07-20 18:03:32.401920] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.401933] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.401941] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a0120) 00:27:57.742 [2024-07-20 18:03:32.401958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:57.742 [2024-07-20 18:03:32.401984] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f91f0, cid 0, qid 0 00:27:57.742 [2024-07-20 18:03:32.408823] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.742 [2024-07-20 18:03:32.408844] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.742 [2024-07-20 18:03:32.408852] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.408860] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f91f0) on tqpair=0x22a0120 00:27:57.742 [2024-07-20 18:03:32.408896] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:57.742 [2024-07-20 18:03:32.408910] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:57.742 [2024-07-20 18:03:32.408925] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:57.742 [2024-07-20 18:03:32.408950] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.408959] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.408967] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a0120) 00:27:57.742 [2024-07-20 18:03:32.408978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.742 [2024-07-20 18:03:32.409003] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f91f0, cid 0, qid 0 00:27:57.742 [2024-07-20 18:03:32.409262] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.742 [2024-07-20 18:03:32.409274] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.742 [2024-07-20 18:03:32.409281] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.409288] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f91f0) on tqpair=0x22a0120 00:27:57.742 [2024-07-20 18:03:32.409313] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:57.742 [2024-07-20 18:03:32.409327] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:57.742 [2024-07-20 18:03:32.409340] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.409347] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.409354] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a0120) 00:27:57.742 [2024-07-20 18:03:32.409365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.742 [2024-07-20 18:03:32.409400] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f91f0, cid 0, qid 0 00:27:57.742 [2024-07-20 18:03:32.409652] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.742 [2024-07-20 18:03:32.409665] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.742 [2024-07-20 18:03:32.409672] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.409679] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f91f0) on tqpair=0x22a0120 00:27:57.742 [2024-07-20 18:03:32.409689] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:57.742 [2024-07-20 18:03:32.409703] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:57.742 [2024-07-20 18:03:32.409715] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.409723] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.409730] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a0120) 00:27:57.742 [2024-07-20 18:03:32.409740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.742 [2024-07-20 18:03:32.409761] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f91f0, cid 0, qid 0 00:27:57.742 [2024-07-20 18:03:32.409985] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.742 [2024-07-20 
18:03:32.410001] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.742 [2024-07-20 18:03:32.410009] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.410016] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f91f0) on tqpair=0x22a0120 00:27:57.742 [2024-07-20 18:03:32.410026] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:57.742 [2024-07-20 18:03:32.410047] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.410058] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.410065] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a0120) 00:27:57.742 [2024-07-20 18:03:32.410075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.742 [2024-07-20 18:03:32.410097] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f91f0, cid 0, qid 0 00:27:57.742 [2024-07-20 18:03:32.410351] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.742 [2024-07-20 18:03:32.410367] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.742 [2024-07-20 18:03:32.410374] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.410381] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f91f0) on tqpair=0x22a0120 00:27:57.742 [2024-07-20 18:03:32.410390] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:57.742 [2024-07-20 18:03:32.410399] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:57.742 [2024-07-20 18:03:32.410412] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:57.742 [2024-07-20 18:03:32.410523] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:57.742 [2024-07-20 18:03:32.410531] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:57.742 [2024-07-20 18:03:32.410545] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.410553] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.410559] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a0120) 00:27:57.742 [2024-07-20 18:03:32.410570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.742 [2024-07-20 18:03:32.410590] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f91f0, cid 0, qid 0 00:27:57.742 [2024-07-20 18:03:32.410864] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.742 [2024-07-20 18:03:32.410880] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.742 [2024-07-20 18:03:32.410887] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.410894] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f91f0) on tqpair=0x22a0120 00:27:57.742 [2024-07-20 18:03:32.410904] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:57.742 [2024-07-20 18:03:32.410921] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.410930] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.410937] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a0120) 00:27:57.742 [2024-07-20 18:03:32.410948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.742 [2024-07-20 18:03:32.410969] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f91f0, cid 0, qid 0 00:27:57.742 [2024-07-20 18:03:32.411218] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.742 [2024-07-20 18:03:32.411231] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.742 [2024-07-20 18:03:32.411238] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.411244] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f91f0) on tqpair=0x22a0120 00:27:57.742 [2024-07-20 18:03:32.411253] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:57.742 [2024-07-20 18:03:32.411267] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:57.742 [2024-07-20 18:03:32.411281] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:57.742 [2024-07-20 18:03:32.411299] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:57.742 [2024-07-20 18:03:32.411317] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.411326] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a0120) 00:27:57.742 [2024-07-20 18:03:32.411352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.742 [2024-07-20 18:03:32.411373] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f91f0, cid 0, qid 0 00:27:57.742 [2024-07-20 18:03:32.411633] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:57.742 [2024-07-20 18:03:32.411649] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:57.742 [2024-07-20 18:03:32.411656] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.411663] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22a0120): datao=0, datal=4096, cccid=0 00:27:57.742 [2024-07-20 18:03:32.411671] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f91f0) on tqpair(0x22a0120): expected_datao=0, payload_size=4096 00:27:57.742 [2024-07-20 18:03:32.411679] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.411769] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.411784] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.411988] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.742 [2024-07-20 18:03:32.412001] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.742 [2024-07-20 18:03:32.412009] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.412015] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f91f0) on tqpair=0x22a0120 00:27:57.742 [2024-07-20 18:03:32.412034] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:57.742 [2024-07-20 18:03:32.412043] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:57.742 [2024-07-20 18:03:32.412051] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:57.742 [2024-07-20 18:03:32.412059] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:57.742 [2024-07-20 18:03:32.412067] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:57.742 [2024-07-20 18:03:32.412075] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:57.742 [2024-07-20 18:03:32.412090] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:57.742 [2024-07-20 18:03:32.412102] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.412125] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.742 [2024-07-20 18:03:32.412132] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a0120) 00:27:57.742 [2024-07-20 18:03:32.412143] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:57.742 [2024-07-20 18:03:32.412165] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f91f0, cid 0, qid 0 00:27:57.742 [2024-07-20 18:03:32.412399] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.742 [2024-07-20 18:03:32.412415] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.743 [2024-07-20 18:03:32.412422] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.412429] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f91f0) on tqpair=0x22a0120 00:27:57.743 [2024-07-20 18:03:32.412442] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.412450] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.412456] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a0120) 00:27:57.743 [2024-07-20 18:03:32.412466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:27:57.743 [2024-07-20 18:03:32.412476] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.412483] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.412490] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22a0120) 00:27:57.743 [2024-07-20 18:03:32.412499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.743 [2024-07-20 18:03:32.412508] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.412515] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.412522] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22a0120) 00:27:57.743 [2024-07-20 18:03:32.412530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.743 [2024-07-20 18:03:32.412540] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.412547] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.412553] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a0120) 00:27:57.743 [2024-07-20 18:03:32.412578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.743 [2024-07-20 18:03:32.412586] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:57.743 [2024-07-20 18:03:32.412605] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:57.743 [2024-07-20 18:03:32.412618] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.412625] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22a0120) 00:27:57.743 [2024-07-20 18:03:32.412635] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.743 [2024-07-20 18:03:32.412657] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f91f0, cid 0, qid 0 00:27:57.743 [2024-07-20 18:03:32.412683] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9350, cid 1, qid 0 00:27:57.743 [2024-07-20 18:03:32.412691] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f94b0, cid 2, qid 0 00:27:57.743 [2024-07-20 18:03:32.412699] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9610, cid 3, qid 0 00:27:57.743 [2024-07-20 18:03:32.412706] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9770, cid 4, qid 0 00:27:57.743 [2024-07-20 18:03:32.416803] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.743 [2024-07-20 18:03:32.416820] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.743 [2024-07-20 18:03:32.416828] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.416834] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9770) on tqpair=0x22a0120 
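Up to this point the trace is the admin-queue bring-up of the discovery controller (nqn.2014-08.org.nvmexpress.discovery) on tqpair 0x22a0120: the CAP read, the CC.EN check and disable, the CC.EN = 1 / CSTS.RDY = 1 handshake, IDENTIFY, AER configuration and keep-alive setup, each carried as a fabrics capsule over TCP. For orientation only, below is a minimal host-side sketch of the same connection using the public SPDK API; it is not the code the test scripts run, and the application name is a hypothetical placeholder.

#include <spdk/stdinc.h>
#include <spdk/env.h>
#include <spdk/nvme.h>
#include <spdk/nvmf_spec.h>

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "discovery_sketch";   /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* The discovery service traced above: NVMe/TCP at 10.0.0.2:4420. */
	memset(&trid, 0, sizeof(trid));
	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "%s", SPDK_NVMF_DISCOVERY_NQN);

	/* spdk_nvme_connect() drives the whole sequence logged above:
	 * FABRIC CONNECT, the CAP read, the CC.EN/CSTS.RDY handshake,
	 * IDENTIFY, AER configuration and keep-alive setup. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("connected, CNTLID 0x%04x\n", cdata->cntlid);

	spdk_nvme_detach(ctrlr);
	return 0;
}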
00:27:57.743 [2024-07-20 18:03:32.416848] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:57.743 [2024-07-20 18:03:32.416858] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:57.743 [2024-07-20 18:03:32.416876] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.416901] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22a0120) 00:27:57.743 [2024-07-20 18:03:32.416912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.743 [2024-07-20 18:03:32.416934] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9770, cid 4, qid 0 00:27:57.743 [2024-07-20 18:03:32.417166] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:57.743 [2024-07-20 18:03:32.417181] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:57.743 [2024-07-20 18:03:32.417188] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.417195] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22a0120): datao=0, datal=4096, cccid=4 00:27:57.743 [2024-07-20 18:03:32.417203] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f9770) on tqpair(0x22a0120): expected_datao=0, payload_size=4096 00:27:57.743 [2024-07-20 18:03:32.417210] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.417220] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.417228] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.417375] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.743 [2024-07-20 18:03:32.417386] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.743 [2024-07-20 18:03:32.417393] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.417400] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9770) on tqpair=0x22a0120 00:27:57.743 [2024-07-20 18:03:32.417420] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:57.743 [2024-07-20 18:03:32.417455] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.417481] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22a0120) 00:27:57.743 [2024-07-20 18:03:32.417492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.743 [2024-07-20 18:03:32.417503] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.417510] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.417517] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22a0120) 00:27:57.743 [2024-07-20 18:03:32.417526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.743 [2024-07-20 18:03:32.417550] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9770, cid 4, qid 0 00:27:57.743 [2024-07-20 18:03:32.417578] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f98d0, cid 5, qid 0 00:27:57.743 [2024-07-20 18:03:32.417839] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:57.743 [2024-07-20 18:03:32.417855] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:57.743 [2024-07-20 18:03:32.417862] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.417869] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22a0120): datao=0, datal=1024, cccid=4 00:27:57.743 [2024-07-20 18:03:32.417876] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f9770) on tqpair(0x22a0120): expected_datao=0, payload_size=1024 00:27:57.743 [2024-07-20 18:03:32.417884] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.417898] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.417906] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.417915] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.743 [2024-07-20 18:03:32.417924] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.743 [2024-07-20 18:03:32.417930] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.417937] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f98d0) on tqpair=0x22a0120 00:27:57.743 [2024-07-20 18:03:32.459013] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.743 [2024-07-20 18:03:32.459033] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.743 [2024-07-20 18:03:32.459041] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.459048] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9770) on tqpair=0x22a0120 00:27:57.743 [2024-07-20 18:03:32.459072] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.459083] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22a0120) 00:27:57.743 [2024-07-20 18:03:32.459094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.743 [2024-07-20 18:03:32.459124] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9770, cid 4, qid 0 00:27:57.743 [2024-07-20 18:03:32.459342] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:57.743 [2024-07-20 18:03:32.459358] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:57.743 [2024-07-20 18:03:32.459365] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.459371] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22a0120): datao=0, datal=3072, cccid=4 00:27:57.743 [2024-07-20 18:03:32.459379] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f9770) on tqpair(0x22a0120): expected_datao=0, payload_size=3072 00:27:57.743 [2024-07-20 18:03:32.459387] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.459478] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
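The GET LOG PAGE (02) commands in this stretch (cdw10 values ending in 0x70, the discovery log page identifier) are the reads whose decoded contents are printed further below. A minimal sketch of issuing that read with the public API, assuming a controller handle obtained as in the previous sketch; the fixed 4096-byte buffer, the busy-wait poll and the helper name read_discovery_log are illustrative simplifications, not taken from the test code.

#include <spdk/nvme.h>
#include <spdk/nvmf_spec.h>

static bool g_log_done;

static void
get_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	g_log_done = true;
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "discovery log read failed\n");
	}
}

/* Read the first 4 KiB of the discovery log page (ID 0x70) over the admin queue. */
static int
read_discovery_log(struct spdk_nvme_ctrlr *ctrlr,
		   struct spdk_nvmf_discovery_log_page *page)
{
	int rc;

	g_log_done = false;
	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					      SPDK_NVME_GLOBAL_NS_TAG, page,
					      4096, 0, get_log_done, NULL);
	if (rc != 0) {
		return rc;
	}
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	printf("generation counter %" PRIu64 ", %" PRIu64 " record(s)\n",
	       page->genctr, page->numrec);
	return 0;
}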
00:27:57.743 [2024-07-20 18:03:32.459491] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.459711] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.743 [2024-07-20 18:03:32.459727] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.743 [2024-07-20 18:03:32.459734] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.459741] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9770) on tqpair=0x22a0120 00:27:57.743 [2024-07-20 18:03:32.459758] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.459767] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22a0120) 00:27:57.743 [2024-07-20 18:03:32.459778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.743 [2024-07-20 18:03:32.459814] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9770, cid 4, qid 0 00:27:57.743 [2024-07-20 18:03:32.460045] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:57.743 [2024-07-20 18:03:32.460060] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:57.743 [2024-07-20 18:03:32.460067] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.460074] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22a0120): datao=0, datal=8, cccid=4 00:27:57.743 [2024-07-20 18:03:32.460081] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f9770) on tqpair(0x22a0120): expected_datao=0, payload_size=8 00:27:57.743 [2024-07-20 18:03:32.460089] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.460098] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.460121] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.504810] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.743 [2024-07-20 18:03:32.504830] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.743 [2024-07-20 18:03:32.504838] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.743 [2024-07-20 18:03:32.504845] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9770) on tqpair=0x22a0120 00:27:57.743 ===================================================== 00:27:57.743 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:57.743 ===================================================== 00:27:57.743 Controller Capabilities/Features 00:27:57.744 ================================ 00:27:57.744 Vendor ID: 0000 00:27:57.744 Subsystem Vendor ID: 0000 00:27:57.744 Serial Number: .................... 00:27:57.744 Model Number: ........................................ 
00:27:57.744 Firmware Version: 24.05.1 00:27:57.744 Recommended Arb Burst: 0 00:27:57.744 IEEE OUI Identifier: 00 00 00 00:27:57.744 Multi-path I/O 00:27:57.744 May have multiple subsystem ports: No 00:27:57.744 May have multiple controllers: No 00:27:57.744 Associated with SR-IOV VF: No 00:27:57.744 Max Data Transfer Size: 131072 00:27:57.744 Max Number of Namespaces: 0 00:27:57.744 Max Number of I/O Queues: 1024 00:27:57.744 NVMe Specification Version (VS): 1.3 00:27:57.744 NVMe Specification Version (Identify): 1.3 00:27:57.744 Maximum Queue Entries: 128 00:27:57.744 Contiguous Queues Required: Yes 00:27:57.744 Arbitration Mechanisms Supported 00:27:57.744 Weighted Round Robin: Not Supported 00:27:57.744 Vendor Specific: Not Supported 00:27:57.744 Reset Timeout: 15000 ms 00:27:57.744 Doorbell Stride: 4 bytes 00:27:57.744 NVM Subsystem Reset: Not Supported 00:27:57.744 Command Sets Supported 00:27:57.744 NVM Command Set: Supported 00:27:57.744 Boot Partition: Not Supported 00:27:57.744 Memory Page Size Minimum: 4096 bytes 00:27:57.744 Memory Page Size Maximum: 4096 bytes 00:27:57.744 Persistent Memory Region: Not Supported 00:27:57.744 Optional Asynchronous Events Supported 00:27:57.744 Namespace Attribute Notices: Not Supported 00:27:57.744 Firmware Activation Notices: Not Supported 00:27:57.744 ANA Change Notices: Not Supported 00:27:57.744 PLE Aggregate Log Change Notices: Not Supported 00:27:57.744 LBA Status Info Alert Notices: Not Supported 00:27:57.744 EGE Aggregate Log Change Notices: Not Supported 00:27:57.744 Normal NVM Subsystem Shutdown event: Not Supported 00:27:57.744 Zone Descriptor Change Notices: Not Supported 00:27:57.744 Discovery Log Change Notices: Supported 00:27:57.744 Controller Attributes 00:27:57.744 128-bit Host Identifier: Not Supported 00:27:57.744 Non-Operational Permissive Mode: Not Supported 00:27:57.744 NVM Sets: Not Supported 00:27:57.744 Read Recovery Levels: Not Supported 00:27:57.744 Endurance Groups: Not Supported 00:27:57.744 Predictable Latency Mode: Not Supported 00:27:57.744 Traffic Based Keep ALive: Not Supported 00:27:57.744 Namespace Granularity: Not Supported 00:27:57.744 SQ Associations: Not Supported 00:27:57.744 UUID List: Not Supported 00:27:57.744 Multi-Domain Subsystem: Not Supported 00:27:57.744 Fixed Capacity Management: Not Supported 00:27:57.744 Variable Capacity Management: Not Supported 00:27:57.744 Delete Endurance Group: Not Supported 00:27:57.744 Delete NVM Set: Not Supported 00:27:57.744 Extended LBA Formats Supported: Not Supported 00:27:57.744 Flexible Data Placement Supported: Not Supported 00:27:57.744 00:27:57.744 Controller Memory Buffer Support 00:27:57.744 ================================ 00:27:57.744 Supported: No 00:27:57.744 00:27:57.744 Persistent Memory Region Support 00:27:57.744 ================================ 00:27:57.744 Supported: No 00:27:57.744 00:27:57.744 Admin Command Set Attributes 00:27:57.744 ============================ 00:27:57.744 Security Send/Receive: Not Supported 00:27:57.744 Format NVM: Not Supported 00:27:57.744 Firmware Activate/Download: Not Supported 00:27:57.744 Namespace Management: Not Supported 00:27:57.744 Device Self-Test: Not Supported 00:27:57.744 Directives: Not Supported 00:27:57.744 NVMe-MI: Not Supported 00:27:57.744 Virtualization Management: Not Supported 00:27:57.744 Doorbell Buffer Config: Not Supported 00:27:57.744 Get LBA Status Capability: Not Supported 00:27:57.744 Command & Feature Lockdown Capability: Not Supported 00:27:57.744 Abort Command Limit: 1 00:27:57.744 
Async Event Request Limit: 4 00:27:57.744 Number of Firmware Slots: N/A 00:27:57.744 Firmware Slot 1 Read-Only: N/A 00:27:57.744 Firmware Activation Without Reset: N/A 00:27:57.744 Multiple Update Detection Support: N/A 00:27:57.744 Firmware Update Granularity: No Information Provided 00:27:57.744 Per-Namespace SMART Log: No 00:27:57.744 Asymmetric Namespace Access Log Page: Not Supported 00:27:57.744 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:57.744 Command Effects Log Page: Not Supported 00:27:57.744 Get Log Page Extended Data: Supported 00:27:57.744 Telemetry Log Pages: Not Supported 00:27:57.744 Persistent Event Log Pages: Not Supported 00:27:57.744 Supported Log Pages Log Page: May Support 00:27:57.744 Commands Supported & Effects Log Page: Not Supported 00:27:57.744 Feature Identifiers & Effects Log Page:May Support 00:27:57.744 NVMe-MI Commands & Effects Log Page: May Support 00:27:57.744 Data Area 4 for Telemetry Log: Not Supported 00:27:57.744 Error Log Page Entries Supported: 128 00:27:57.744 Keep Alive: Not Supported 00:27:57.744 00:27:57.744 NVM Command Set Attributes 00:27:57.744 ========================== 00:27:57.744 Submission Queue Entry Size 00:27:57.744 Max: 1 00:27:57.744 Min: 1 00:27:57.744 Completion Queue Entry Size 00:27:57.744 Max: 1 00:27:57.744 Min: 1 00:27:57.744 Number of Namespaces: 0 00:27:57.744 Compare Command: Not Supported 00:27:57.744 Write Uncorrectable Command: Not Supported 00:27:57.744 Dataset Management Command: Not Supported 00:27:57.744 Write Zeroes Command: Not Supported 00:27:57.744 Set Features Save Field: Not Supported 00:27:57.744 Reservations: Not Supported 00:27:57.744 Timestamp: Not Supported 00:27:57.744 Copy: Not Supported 00:27:57.744 Volatile Write Cache: Not Present 00:27:57.744 Atomic Write Unit (Normal): 1 00:27:57.744 Atomic Write Unit (PFail): 1 00:27:57.744 Atomic Compare & Write Unit: 1 00:27:57.744 Fused Compare & Write: Supported 00:27:57.744 Scatter-Gather List 00:27:57.744 SGL Command Set: Supported 00:27:57.744 SGL Keyed: Supported 00:27:57.744 SGL Bit Bucket Descriptor: Not Supported 00:27:57.744 SGL Metadata Pointer: Not Supported 00:27:57.744 Oversized SGL: Not Supported 00:27:57.744 SGL Metadata Address: Not Supported 00:27:57.744 SGL Offset: Supported 00:27:57.744 Transport SGL Data Block: Not Supported 00:27:57.744 Replay Protected Memory Block: Not Supported 00:27:57.744 00:27:57.744 Firmware Slot Information 00:27:57.744 ========================= 00:27:57.744 Active slot: 0 00:27:57.744 00:27:57.744 00:27:57.744 Error Log 00:27:57.744 ========= 00:27:57.744 00:27:57.744 Active Namespaces 00:27:57.744 ================= 00:27:57.744 Discovery Log Page 00:27:57.744 ================== 00:27:57.744 Generation Counter: 2 00:27:57.744 Number of Records: 2 00:27:57.744 Record Format: 0 00:27:57.744 00:27:57.744 Discovery Log Entry 0 00:27:57.744 ---------------------- 00:27:57.744 Transport Type: 3 (TCP) 00:27:57.744 Address Family: 1 (IPv4) 00:27:57.744 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:57.744 Entry Flags: 00:27:57.744 Duplicate Returned Information: 1 00:27:57.744 Explicit Persistent Connection Support for Discovery: 1 00:27:57.744 Transport Requirements: 00:27:57.744 Secure Channel: Not Required 00:27:57.744 Port ID: 0 (0x0000) 00:27:57.744 Controller ID: 65535 (0xffff) 00:27:57.744 Admin Max SQ Size: 128 00:27:57.744 Transport Service Identifier: 4420 00:27:57.744 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:57.744 Transport Address: 10.0.0.2 00:27:57.744 
Discovery Log Entry 1 00:27:57.744 ---------------------- 00:27:57.744 Transport Type: 3 (TCP) 00:27:57.744 Address Family: 1 (IPv4) 00:27:57.744 Subsystem Type: 2 (NVM Subsystem) 00:27:57.744 Entry Flags: 00:27:57.744 Duplicate Returned Information: 0 00:27:57.744 Explicit Persistent Connection Support for Discovery: 0 00:27:57.744 Transport Requirements: 00:27:57.744 Secure Channel: Not Required 00:27:57.744 Port ID: 0 (0x0000) 00:27:57.744 Controller ID: 65535 (0xffff) 00:27:57.744 Admin Max SQ Size: 128 00:27:57.744 Transport Service Identifier: 4420 00:27:57.744 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:57.744 Transport Address: 10.0.0.2 [2024-07-20 18:03:32.504958] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:57.744 [2024-07-20 18:03:32.504984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.744 [2024-07-20 18:03:32.504996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.744 [2024-07-20 18:03:32.505007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.744 [2024-07-20 18:03:32.505016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.744 [2024-07-20 18:03:32.505034] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.744 [2024-07-20 18:03:32.505044] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.744 [2024-07-20 18:03:32.505050] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a0120) 00:27:57.744 [2024-07-20 18:03:32.505061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.744 [2024-07-20 18:03:32.505086] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9610, cid 3, qid 0 00:27:57.744 [2024-07-20 18:03:32.505295] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.745 [2024-07-20 18:03:32.505307] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.745 [2024-07-20 18:03:32.505315] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.505321] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9610) on tqpair=0x22a0120 00:27:57.745 [2024-07-20 18:03:32.505335] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.505343] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.505349] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a0120) 00:27:57.745 [2024-07-20 18:03:32.505359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.745 [2024-07-20 18:03:32.505385] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9610, cid 3, qid 0 00:27:57.745 [2024-07-20 18:03:32.505613] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.745 [2024-07-20 18:03:32.505629] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.745 [2024-07-20 18:03:32.505636] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.505643] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9610) on tqpair=0x22a0120 00:27:57.745 [2024-07-20 18:03:32.505652] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:57.745 [2024-07-20 18:03:32.505661] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:57.745 [2024-07-20 18:03:32.505677] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.505687] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.505693] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a0120) 00:27:57.745 [2024-07-20 18:03:32.505704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.745 [2024-07-20 18:03:32.505729] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9610, cid 3, qid 0 00:27:57.745 [2024-07-20 18:03:32.505942] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.745 [2024-07-20 18:03:32.505958] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.745 [2024-07-20 18:03:32.505965] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.505972] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9610) on tqpair=0x22a0120 00:27:57.745 [2024-07-20 18:03:32.505990] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.506000] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.506007] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a0120) 00:27:57.745 [2024-07-20 18:03:32.506018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.745 [2024-07-20 18:03:32.506038] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9610, cid 3, qid 0 00:27:57.745 [2024-07-20 18:03:32.506290] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.745 [2024-07-20 18:03:32.506302] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.745 [2024-07-20 18:03:32.506309] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.506316] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9610) on tqpair=0x22a0120 00:27:57.745 [2024-07-20 18:03:32.506333] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.506343] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.506350] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a0120) 00:27:57.745 [2024-07-20 18:03:32.506360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.745 [2024-07-20 18:03:32.506380] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9610, cid 3, qid 0 00:27:57.745 [2024-07-20 18:03:32.506630] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.745 [2024-07-20 
18:03:32.506642] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.745 [2024-07-20 18:03:32.506649] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.506656] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9610) on tqpair=0x22a0120 00:27:57.745 [2024-07-20 18:03:32.506673] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.506683] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.506689] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a0120) 00:27:57.745 [2024-07-20 18:03:32.506700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.745 [2024-07-20 18:03:32.506720] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9610, cid 3, qid 0 00:27:57.745 [2024-07-20 18:03:32.506970] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.745 [2024-07-20 18:03:32.506984] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.745 [2024-07-20 18:03:32.506991] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.506998] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9610) on tqpair=0x22a0120 00:27:57.745 [2024-07-20 18:03:32.507015] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.507025] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.507031] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a0120) 00:27:57.745 [2024-07-20 18:03:32.507042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.745 [2024-07-20 18:03:32.507063] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9610, cid 3, qid 0 00:27:57.745 [2024-07-20 18:03:32.507263] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.745 [2024-07-20 18:03:32.507276] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.745 [2024-07-20 18:03:32.507283] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.507290] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9610) on tqpair=0x22a0120 00:27:57.745 [2024-07-20 18:03:32.507307] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.507317] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.507323] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a0120) 00:27:57.745 [2024-07-20 18:03:32.507334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.745 [2024-07-20 18:03:32.507354] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9610, cid 3, qid 0 00:27:57.745 [2024-07-20 18:03:32.507554] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.745 [2024-07-20 18:03:32.507566] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.745 [2024-07-20 18:03:32.507573] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
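From "Prepare to destruct SSD" onward the trace is the orderly teardown of the discovery controller: a FABRIC PROPERTY SET on cid:3 starts the shutdown, and CSTS is then polled with repeated FABRIC PROPERTY GETs until the controller reports shutdown complete (seen a few lines below, after roughly 7 ms). Applications do not drive this by hand; it is hidden behind detach. A minimal sketch assuming the public async detach API; the helper name detach_example is hypothetical.

#include <spdk/nvme.h>

/* Tear down a controller without blocking the calling thread. */
static void
detach_example(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *ctx = NULL;

	if (spdk_nvme_detach_async(ctrlr, &ctx) != 0) {
		return;
	}
	/* Poll until the shutdown handshake (CC.SHN / CSTS.SHST) finishes. */
	while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
		;
	}
}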
00:27:57.745 [2024-07-20 18:03:32.507580] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9610) on tqpair=0x22a0120 00:27:57.745 [2024-07-20 18:03:32.507597] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.507606] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.507613] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a0120) 00:27:57.745 [2024-07-20 18:03:32.507623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.745 [2024-07-20 18:03:32.507643] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9610, cid 3, qid 0 00:27:57.745 [2024-07-20 18:03:32.507904] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.745 [2024-07-20 18:03:32.507920] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.745 [2024-07-20 18:03:32.507927] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.507934] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9610) on tqpair=0x22a0120 00:27:57.745 [2024-07-20 18:03:32.507952] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.507962] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.507968] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a0120) 00:27:57.745 [2024-07-20 18:03:32.507979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.745 [2024-07-20 18:03:32.508000] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9610, cid 3, qid 0 00:27:57.745 [2024-07-20 18:03:32.508200] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.745 [2024-07-20 18:03:32.508216] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.745 [2024-07-20 18:03:32.508223] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.508230] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9610) on tqpair=0x22a0120 00:27:57.745 [2024-07-20 18:03:32.508247] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.508257] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.745 [2024-07-20 18:03:32.508264] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a0120) 00:27:57.745 [2024-07-20 18:03:32.508274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.745 [2024-07-20 18:03:32.508295] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9610, cid 3, qid 0 00:27:57.746 [2024-07-20 18:03:32.508498] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.746 [2024-07-20 18:03:32.508513] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.746 [2024-07-20 18:03:32.508520] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.746 [2024-07-20 18:03:32.508527] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9610) on tqpair=0x22a0120 00:27:57.746 [2024-07-20 18:03:32.508544] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.746 [2024-07-20 18:03:32.508554] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.746 [2024-07-20 18:03:32.508561] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a0120) 00:27:57.746 [2024-07-20 18:03:32.508571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.746 [2024-07-20 18:03:32.508591] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9610, cid 3, qid 0 00:27:57.746 [2024-07-20 18:03:32.512798] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.746 [2024-07-20 18:03:32.512826] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.746 [2024-07-20 18:03:32.512833] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.746 [2024-07-20 18:03:32.512856] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9610) on tqpair=0x22a0120 00:27:57.746 [2024-07-20 18:03:32.512876] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:57.746 [2024-07-20 18:03:32.512886] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:57.746 [2024-07-20 18:03:32.512893] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a0120) 00:27:57.746 [2024-07-20 18:03:32.512903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.746 [2024-07-20 18:03:32.512926] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f9610, cid 3, qid 0 00:27:57.746 [2024-07-20 18:03:32.513182] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:57.746 [2024-07-20 18:03:32.513198] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:57.746 [2024-07-20 18:03:32.513205] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:57.746 [2024-07-20 18:03:32.513212] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22f9610) on tqpair=0x22a0120 00:27:57.746 [2024-07-20 18:03:32.513227] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:27:57.746 00:27:57.746 18:03:32 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:58.005 [2024-07-20 18:03:32.547476] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
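The remainder of the output is the same exercise against the NVM subsystem advertised in discovery log entry 1: host/identify.sh invokes spdk_nvme_identify with an explicit transport ID string naming nqn.2016-06.io.spdk:cnode1, and the trace below repeats the connect/enable/identify state machine for that controller on tqpair 0xf30120. A minimal sketch of doing the same programmatically, assuming the public spdk_nvme_transport_id_parse() helper; the function name identify_cnode1 and the printed fields are illustrative.

#include <spdk/nvme.h>

/* Connect directly to the NVM subsystem listed in discovery log entry 1. */
static int
identify_cnode1(void)
{
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return -1;
	}
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return -1;
	}
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("cnode1: CNTLID 0x%04x, NN %u\n", cdata->cntlid, cdata->nn);
	spdk_nvme_detach(ctrlr);
	return 0;
}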
00:27:58.005 [2024-07-20 18:03:32.547521] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041541 ] 00:27:58.005 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.005 [2024-07-20 18:03:32.579642] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:58.005 [2024-07-20 18:03:32.579694] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:58.005 [2024-07-20 18:03:32.579703] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:58.005 [2024-07-20 18:03:32.579717] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:58.005 [2024-07-20 18:03:32.579729] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:58.005 [2024-07-20 18:03:32.580046] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:58.005 [2024-07-20 18:03:32.580086] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf30120 0 00:27:58.005 [2024-07-20 18:03:32.590810] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:58.005 [2024-07-20 18:03:32.590828] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:58.005 [2024-07-20 18:03:32.590836] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:58.005 [2024-07-20 18:03:32.590842] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:58.005 [2024-07-20 18:03:32.590892] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.590904] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.590912] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf30120) 00:27:58.005 [2024-07-20 18:03:32.590926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:58.005 [2024-07-20 18:03:32.590952] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf891f0, cid 0, qid 0 00:27:58.005 [2024-07-20 18:03:32.598804] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.005 [2024-07-20 18:03:32.598822] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.005 [2024-07-20 18:03:32.598829] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.598837] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf891f0) on tqpair=0xf30120 00:27:58.005 [2024-07-20 18:03:32.598850] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:58.005 [2024-07-20 18:03:32.598875] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:58.005 [2024-07-20 18:03:32.598885] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:58.005 [2024-07-20 18:03:32.598907] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.598916] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.005 [2024-07-20 
18:03:32.598923] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf30120) 00:27:58.005 [2024-07-20 18:03:32.598935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.005 [2024-07-20 18:03:32.598959] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf891f0, cid 0, qid 0 00:27:58.005 [2024-07-20 18:03:32.599197] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.005 [2024-07-20 18:03:32.599209] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.005 [2024-07-20 18:03:32.599216] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.599223] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf891f0) on tqpair=0xf30120 00:27:58.005 [2024-07-20 18:03:32.599236] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:58.005 [2024-07-20 18:03:32.599250] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:58.005 [2024-07-20 18:03:32.599263] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.599270] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.599277] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf30120) 00:27:58.005 [2024-07-20 18:03:32.599288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.005 [2024-07-20 18:03:32.599309] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf891f0, cid 0, qid 0 00:27:58.005 [2024-07-20 18:03:32.599547] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.005 [2024-07-20 18:03:32.599562] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.005 [2024-07-20 18:03:32.599573] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.599581] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf891f0) on tqpair=0xf30120 00:27:58.005 [2024-07-20 18:03:32.599590] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:58.005 [2024-07-20 18:03:32.599604] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:58.005 [2024-07-20 18:03:32.599618] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.599625] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.599632] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf30120) 00:27:58.005 [2024-07-20 18:03:32.599643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.005 [2024-07-20 18:03:32.599664] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf891f0, cid 0, qid 0 00:27:58.005 [2024-07-20 18:03:32.599908] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.005 [2024-07-20 18:03:32.599924] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.005 
[2024-07-20 18:03:32.599931] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.599938] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf891f0) on tqpair=0xf30120 00:27:58.005 [2024-07-20 18:03:32.599947] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:58.005 [2024-07-20 18:03:32.599964] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.599974] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.599980] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf30120) 00:27:58.005 [2024-07-20 18:03:32.599991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.005 [2024-07-20 18:03:32.600013] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf891f0, cid 0, qid 0 00:27:58.005 [2024-07-20 18:03:32.600216] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.005 [2024-07-20 18:03:32.600231] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.005 [2024-07-20 18:03:32.600238] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.600245] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf891f0) on tqpair=0xf30120 00:27:58.005 [2024-07-20 18:03:32.600253] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:58.005 [2024-07-20 18:03:32.600262] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:58.005 [2024-07-20 18:03:32.600276] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:58.005 [2024-07-20 18:03:32.600386] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:58.005 [2024-07-20 18:03:32.600394] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:58.005 [2024-07-20 18:03:32.600407] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.600415] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.600421] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf30120) 00:27:58.005 [2024-07-20 18:03:32.600432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.005 [2024-07-20 18:03:32.600453] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf891f0, cid 0, qid 0 00:27:58.005 [2024-07-20 18:03:32.600743] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.005 [2024-07-20 18:03:32.600758] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.005 [2024-07-20 18:03:32.600766] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.600773] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf891f0) on tqpair=0xf30120 00:27:58.005 
[2024-07-20 18:03:32.600781] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:58.005 [2024-07-20 18:03:32.600805] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.600816] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.600823] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf30120) 00:27:58.005 [2024-07-20 18:03:32.600834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.005 [2024-07-20 18:03:32.600855] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf891f0, cid 0, qid 0 00:27:58.005 [2024-07-20 18:03:32.601092] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.005 [2024-07-20 18:03:32.601104] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.005 [2024-07-20 18:03:32.601111] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.601118] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf891f0) on tqpair=0xf30120 00:27:58.005 [2024-07-20 18:03:32.601126] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:58.005 [2024-07-20 18:03:32.601135] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:58.005 [2024-07-20 18:03:32.601148] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:58.005 [2024-07-20 18:03:32.601162] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:58.005 [2024-07-20 18:03:32.601178] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.005 [2024-07-20 18:03:32.601187] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf30120) 00:27:58.005 [2024-07-20 18:03:32.601197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.005 [2024-07-20 18:03:32.601218] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf891f0, cid 0, qid 0 00:27:58.005 [2024-07-20 18:03:32.601557] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.005 [2024-07-20 18:03:32.601573] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.006 [2024-07-20 18:03:32.601580] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.601587] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf30120): datao=0, datal=4096, cccid=0 00:27:58.006 [2024-07-20 18:03:32.601595] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf891f0) on tqpair(0xf30120): expected_datao=0, payload_size=4096 00:27:58.006 [2024-07-20 18:03:32.601603] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.601682] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.601692] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:27:58.006 [2024-07-20 18:03:32.641996] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.006 [2024-07-20 18:03:32.642016] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.006 [2024-07-20 18:03:32.642024] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.642031] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf891f0) on tqpair=0xf30120 00:27:58.006 [2024-07-20 18:03:32.642050] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:58.006 [2024-07-20 18:03:32.642061] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:58.006 [2024-07-20 18:03:32.642069] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:58.006 [2024-07-20 18:03:32.642076] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:58.006 [2024-07-20 18:03:32.642084] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:58.006 [2024-07-20 18:03:32.642092] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:58.006 [2024-07-20 18:03:32.642107] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:58.006 [2024-07-20 18:03:32.642120] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.642128] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.642135] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf30120) 00:27:58.006 [2024-07-20 18:03:32.642146] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:58.006 [2024-07-20 18:03:32.642169] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf891f0, cid 0, qid 0 00:27:58.006 [2024-07-20 18:03:32.642412] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.006 [2024-07-20 18:03:32.642424] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.006 [2024-07-20 18:03:32.642432] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.642439] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf891f0) on tqpair=0xf30120 00:27:58.006 [2024-07-20 18:03:32.642449] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.642457] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.642463] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf30120) 00:27:58.006 [2024-07-20 18:03:32.642473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.006 [2024-07-20 18:03:32.642483] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.642490] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.642497] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0xf30120) 00:27:58.006 [2024-07-20 18:03:32.642505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.006 [2024-07-20 18:03:32.642515] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.642522] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.642544] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf30120) 00:27:58.006 [2024-07-20 18:03:32.642553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.006 [2024-07-20 18:03:32.642563] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.642570] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.642576] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf30120) 00:27:58.006 [2024-07-20 18:03:32.642584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.006 [2024-07-20 18:03:32.642593] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:58.006 [2024-07-20 18:03:32.642615] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:58.006 [2024-07-20 18:03:32.642628] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.642636] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf30120) 00:27:58.006 [2024-07-20 18:03:32.642646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.006 [2024-07-20 18:03:32.642668] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf891f0, cid 0, qid 0 00:27:58.006 [2024-07-20 18:03:32.642693] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89350, cid 1, qid 0 00:27:58.006 [2024-07-20 18:03:32.642702] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf894b0, cid 2, qid 0 00:27:58.006 [2024-07-20 18:03:32.642710] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89610, cid 3, qid 0 00:27:58.006 [2024-07-20 18:03:32.642718] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89770, cid 4, qid 0 00:27:58.006 [2024-07-20 18:03:32.646807] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.006 [2024-07-20 18:03:32.646824] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.006 [2024-07-20 18:03:32.646831] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.646838] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89770) on tqpair=0xf30120 00:27:58.006 [2024-07-20 18:03:32.646846] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:58.006 [2024-07-20 18:03:32.646855] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:58.006 
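The records above show the initiator walking the standard attach sequence against nqn.2016-06.io.spdk:cnode1: identify controller (MDTS caps the transfer size at 131072 bytes), configure asynchronous event reporting, then set and read back the keep-alive timer (a keep-alive is sent every 5 s). The target subsystem it attaches to is configured over the SPDK RPC interface earlier in this run; a representative setup is sketched below. The bdev name, sizes and option values are illustrative assumptions, not the literal commands captured in this log.

    scripts/rpc.py nvmf_create_transport -t TCP
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512-byte blocks (assumed)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 32
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420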
[2024-07-20 18:03:32.646869] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:58.006 [2024-07-20 18:03:32.646895] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:58.006 [2024-07-20 18:03:32.646907] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.646914] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.646921] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf30120) 00:27:58.006 [2024-07-20 18:03:32.646932] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:58.006 [2024-07-20 18:03:32.646954] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89770, cid 4, qid 0 00:27:58.006 [2024-07-20 18:03:32.647192] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.006 [2024-07-20 18:03:32.647208] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.006 [2024-07-20 18:03:32.647215] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.647222] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89770) on tqpair=0xf30120 00:27:58.006 [2024-07-20 18:03:32.647291] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:58.006 [2024-07-20 18:03:32.647310] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:58.006 [2024-07-20 18:03:32.647325] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.647347] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf30120) 00:27:58.006 [2024-07-20 18:03:32.647358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.006 [2024-07-20 18:03:32.647380] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89770, cid 4, qid 0 00:27:58.006 [2024-07-20 18:03:32.647646] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.006 [2024-07-20 18:03:32.647662] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.006 [2024-07-20 18:03:32.647669] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.647676] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf30120): datao=0, datal=4096, cccid=4 00:27:58.006 [2024-07-20 18:03:32.647684] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf89770) on tqpair(0xf30120): expected_datao=0, payload_size=4096 00:27:58.006 [2024-07-20 18:03:32.647691] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.647702] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.647710] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.647842] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.006 [2024-07-20 18:03:32.647855] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.006 [2024-07-20 18:03:32.647862] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.647869] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89770) on tqpair=0xf30120 00:27:58.006 [2024-07-20 18:03:32.647883] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:58.006 [2024-07-20 18:03:32.647900] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:58.006 [2024-07-20 18:03:32.647917] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:58.006 [2024-07-20 18:03:32.647931] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.647938] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf30120) 00:27:58.006 [2024-07-20 18:03:32.647949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.006 [2024-07-20 18:03:32.647971] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89770, cid 4, qid 0 00:27:58.006 [2024-07-20 18:03:32.648201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.006 [2024-07-20 18:03:32.648216] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.006 [2024-07-20 18:03:32.648224] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.648230] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf30120): datao=0, datal=4096, cccid=4 00:27:58.006 [2024-07-20 18:03:32.648238] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf89770) on tqpair(0xf30120): expected_datao=0, payload_size=4096 00:27:58.006 [2024-07-20 18:03:32.648246] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.648256] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.648264] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.006 [2024-07-20 18:03:32.648392] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.006 [2024-07-20 18:03:32.648404] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.007 [2024-07-20 18:03:32.648411] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.648418] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89770) on tqpair=0xf30120 00:27:58.007 [2024-07-20 18:03:32.648438] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:58.007 [2024-07-20 18:03:32.648456] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:58.007 [2024-07-20 18:03:32.648470] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.648478] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf30120) 00:27:58.007 [2024-07-20 18:03:32.648509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.007 [2024-07-20 18:03:32.648532] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89770, cid 4, qid 0 00:27:58.007 [2024-07-20 18:03:32.648820] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.007 [2024-07-20 18:03:32.648837] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.007 [2024-07-20 18:03:32.648844] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.648851] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf30120): datao=0, datal=4096, cccid=4 00:27:58.007 [2024-07-20 18:03:32.648858] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf89770) on tqpair(0xf30120): expected_datao=0, payload_size=4096 00:27:58.007 [2024-07-20 18:03:32.648866] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.648876] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.648884] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.649010] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.007 [2024-07-20 18:03:32.649022] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.007 [2024-07-20 18:03:32.649029] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.649036] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89770) on tqpair=0xf30120 00:27:58.007 [2024-07-20 18:03:32.649049] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:58.007 [2024-07-20 18:03:32.649064] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:58.007 [2024-07-20 18:03:32.649080] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:58.007 [2024-07-20 18:03:32.649091] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:58.007 [2024-07-20 18:03:32.649100] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:58.007 [2024-07-20 18:03:32.649109] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:58.007 [2024-07-20 18:03:32.649117] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:58.007 [2024-07-20 18:03:32.649141] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:58.007 [2024-07-20 18:03:32.649163] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.649173] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf30120) 00:27:58.007 [2024-07-20 18:03:32.649183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.007 [2024-07-20 18:03:32.649194] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.649201] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.649208] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf30120) 00:27:58.007 [2024-07-20 18:03:32.649217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.007 [2024-07-20 18:03:32.649241] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89770, cid 4, qid 0 00:27:58.007 [2024-07-20 18:03:32.649268] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf898d0, cid 5, qid 0 00:27:58.007 [2024-07-20 18:03:32.649535] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.007 [2024-07-20 18:03:32.649555] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.007 [2024-07-20 18:03:32.649563] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.649570] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89770) on tqpair=0xf30120 00:27:58.007 [2024-07-20 18:03:32.649581] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.007 [2024-07-20 18:03:32.649590] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.007 [2024-07-20 18:03:32.649597] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.649603] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf898d0) on tqpair=0xf30120 00:27:58.007 [2024-07-20 18:03:32.649620] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.649629] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf30120) 00:27:58.007 [2024-07-20 18:03:32.649640] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.007 [2024-07-20 18:03:32.649661] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf898d0, cid 5, qid 0 00:27:58.007 [2024-07-20 18:03:32.650000] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.007 [2024-07-20 18:03:32.650015] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.007 [2024-07-20 18:03:32.650022] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.650029] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf898d0) on tqpair=0xf30120 00:27:58.007 [2024-07-20 18:03:32.650045] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.650054] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf30120) 00:27:58.007 [2024-07-20 18:03:32.650064] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.007 [2024-07-20 18:03:32.650085] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf898d0, cid 5, qid 0 00:27:58.007 [2024-07-20 18:03:32.650318] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.007 [2024-07-20 18:03:32.650333] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.007 [2024-07-20 18:03:32.650340] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.650347] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf898d0) on tqpair=0xf30120 00:27:58.007 [2024-07-20 18:03:32.650363] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.650372] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf30120) 00:27:58.007 [2024-07-20 18:03:32.650383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.007 [2024-07-20 18:03:32.650403] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf898d0, cid 5, qid 0 00:27:58.007 [2024-07-20 18:03:32.650640] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.007 [2024-07-20 18:03:32.650655] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.007 [2024-07-20 18:03:32.650662] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.650669] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf898d0) on tqpair=0xf30120 00:27:58.007 [2024-07-20 18:03:32.650688] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.650698] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf30120) 00:27:58.007 [2024-07-20 18:03:32.650709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.007 [2024-07-20 18:03:32.650720] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.650728] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf30120) 00:27:58.007 [2024-07-20 18:03:32.650741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.007 [2024-07-20 18:03:32.650753] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.650760] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xf30120) 00:27:58.007 [2024-07-20 18:03:32.650770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.007 [2024-07-20 18:03:32.650781] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.650788] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf30120) 00:27:58.007 [2024-07-20 18:03:32.654809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.007 [2024-07-20 18:03:32.654836] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf898d0, cid 5, qid 0 00:27:58.007 [2024-07-20 18:03:32.654863] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89770, cid 4, qid 0 00:27:58.007 [2024-07-20 18:03:32.654871] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89a30, cid 6, qid 0 00:27:58.007 [2024-07-20 18:03:32.654879] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89b90, cid 7, qid 0 00:27:58.007 [2024-07-20 18:03:32.655168] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.007 [2024-07-20 18:03:32.655184] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.007 [2024-07-20 18:03:32.655192] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.655198] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf30120): datao=0, datal=8192, cccid=5 00:27:58.007 [2024-07-20 18:03:32.655206] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf898d0) on tqpair(0xf30120): expected_datao=0, payload_size=8192 00:27:58.007 [2024-07-20 18:03:32.655214] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.655443] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.655453] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.655462] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.007 [2024-07-20 18:03:32.655472] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.007 [2024-07-20 18:03:32.655478] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.655485] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf30120): datao=0, datal=512, cccid=4 00:27:58.007 [2024-07-20 18:03:32.655493] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf89770) on tqpair(0xf30120): expected_datao=0, payload_size=512 00:27:58.007 [2024-07-20 18:03:32.655500] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.655510] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.655517] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.655525] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.007 [2024-07-20 18:03:32.655534] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.007 [2024-07-20 18:03:32.655541] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.007 [2024-07-20 18:03:32.655547] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf30120): datao=0, datal=512, cccid=6 00:27:58.008 [2024-07-20 18:03:32.655555] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf89a30) on tqpair(0xf30120): expected_datao=0, payload_size=512 00:27:58.008 [2024-07-20 18:03:32.655563] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.008 [2024-07-20 18:03:32.655572] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.008 [2024-07-20 18:03:32.655583] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.008 [2024-07-20 18:03:32.655593] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.008 [2024-07-20 18:03:32.655602] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.008 [2024-07-20 18:03:32.655608] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.008 [2024-07-20 18:03:32.655615] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf30120): datao=0, datal=4096, cccid=7 00:27:58.008 [2024-07-20 18:03:32.655622] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xf89b90) on tqpair(0xf30120): expected_datao=0, payload_size=4096 00:27:58.008 [2024-07-20 18:03:32.655630] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.008 [2024-07-20 18:03:32.655640] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.008 [2024-07-20 18:03:32.655647] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.008 [2024-07-20 18:03:32.655659] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.008 [2024-07-20 18:03:32.655668] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.008 [2024-07-20 18:03:32.655675] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.008 [2024-07-20 18:03:32.655682] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf898d0) on tqpair=0xf30120 00:27:58.008 [2024-07-20 18:03:32.655701] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.008 [2024-07-20 18:03:32.655712] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.008 [2024-07-20 18:03:32.655719] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.008 [2024-07-20 18:03:32.655726] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89770) on tqpair=0xf30120 00:27:58.008 [2024-07-20 18:03:32.655739] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.008 [2024-07-20 18:03:32.655750] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.008 [2024-07-20 18:03:32.655757] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.008 [2024-07-20 18:03:32.655778] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89a30) on tqpair=0xf30120 00:27:58.008 [2024-07-20 18:03:32.655791] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.008 [2024-07-20 18:03:32.655815] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.008 [2024-07-20 18:03:32.655837] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.008 [2024-07-20 18:03:32.655844] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89b90) on tqpair=0xf30120 00:27:58.008 ===================================================== 00:27:58.008 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:58.008 ===================================================== 00:27:58.008 Controller Capabilities/Features 00:27:58.008 ================================ 00:27:58.008 Vendor ID: 8086 00:27:58.008 Subsystem Vendor ID: 8086 00:27:58.008 Serial Number: SPDK00000000000001 00:27:58.008 Model Number: SPDK bdev Controller 00:27:58.008 Firmware Version: 24.05.1 00:27:58.008 Recommended Arb Burst: 6 00:27:58.008 IEEE OUI Identifier: e4 d2 5c 00:27:58.008 Multi-path I/O 00:27:58.008 May have multiple subsystem ports: Yes 00:27:58.008 May have multiple controllers: Yes 00:27:58.008 Associated with SR-IOV VF: No 00:27:58.008 Max Data Transfer Size: 131072 00:27:58.008 Max Number of Namespaces: 32 00:27:58.008 Max Number of I/O Queues: 127 00:27:58.008 NVMe Specification Version (VS): 1.3 00:27:58.008 NVMe Specification Version (Identify): 1.3 00:27:58.008 Maximum Queue Entries: 128 00:27:58.008 Contiguous Queues Required: Yes 00:27:58.008 Arbitration Mechanisms Supported 00:27:58.008 Weighted Round Robin: Not Supported 00:27:58.008 Vendor Specific: Not Supported 00:27:58.008 Reset Timeout: 15000 ms 00:27:58.008 Doorbell Stride: 4 bytes 00:27:58.008 
NVM Subsystem Reset: Not Supported 00:27:58.008 Command Sets Supported 00:27:58.008 NVM Command Set: Supported 00:27:58.008 Boot Partition: Not Supported 00:27:58.008 Memory Page Size Minimum: 4096 bytes 00:27:58.008 Memory Page Size Maximum: 4096 bytes 00:27:58.008 Persistent Memory Region: Not Supported 00:27:58.008 Optional Asynchronous Events Supported 00:27:58.008 Namespace Attribute Notices: Supported 00:27:58.008 Firmware Activation Notices: Not Supported 00:27:58.008 ANA Change Notices: Not Supported 00:27:58.008 PLE Aggregate Log Change Notices: Not Supported 00:27:58.008 LBA Status Info Alert Notices: Not Supported 00:27:58.008 EGE Aggregate Log Change Notices: Not Supported 00:27:58.008 Normal NVM Subsystem Shutdown event: Not Supported 00:27:58.008 Zone Descriptor Change Notices: Not Supported 00:27:58.008 Discovery Log Change Notices: Not Supported 00:27:58.008 Controller Attributes 00:27:58.008 128-bit Host Identifier: Supported 00:27:58.008 Non-Operational Permissive Mode: Not Supported 00:27:58.008 NVM Sets: Not Supported 00:27:58.008 Read Recovery Levels: Not Supported 00:27:58.008 Endurance Groups: Not Supported 00:27:58.008 Predictable Latency Mode: Not Supported 00:27:58.008 Traffic Based Keep ALive: Not Supported 00:27:58.008 Namespace Granularity: Not Supported 00:27:58.008 SQ Associations: Not Supported 00:27:58.008 UUID List: Not Supported 00:27:58.008 Multi-Domain Subsystem: Not Supported 00:27:58.008 Fixed Capacity Management: Not Supported 00:27:58.008 Variable Capacity Management: Not Supported 00:27:58.008 Delete Endurance Group: Not Supported 00:27:58.008 Delete NVM Set: Not Supported 00:27:58.008 Extended LBA Formats Supported: Not Supported 00:27:58.008 Flexible Data Placement Supported: Not Supported 00:27:58.008 00:27:58.008 Controller Memory Buffer Support 00:27:58.008 ================================ 00:27:58.008 Supported: No 00:27:58.008 00:27:58.008 Persistent Memory Region Support 00:27:58.008 ================================ 00:27:58.008 Supported: No 00:27:58.008 00:27:58.008 Admin Command Set Attributes 00:27:58.008 ============================ 00:27:58.008 Security Send/Receive: Not Supported 00:27:58.008 Format NVM: Not Supported 00:27:58.008 Firmware Activate/Download: Not Supported 00:27:58.008 Namespace Management: Not Supported 00:27:58.008 Device Self-Test: Not Supported 00:27:58.008 Directives: Not Supported 00:27:58.008 NVMe-MI: Not Supported 00:27:58.008 Virtualization Management: Not Supported 00:27:58.008 Doorbell Buffer Config: Not Supported 00:27:58.008 Get LBA Status Capability: Not Supported 00:27:58.008 Command & Feature Lockdown Capability: Not Supported 00:27:58.008 Abort Command Limit: 4 00:27:58.008 Async Event Request Limit: 4 00:27:58.008 Number of Firmware Slots: N/A 00:27:58.008 Firmware Slot 1 Read-Only: N/A 00:27:58.008 Firmware Activation Without Reset: N/A 00:27:58.008 Multiple Update Detection Support: N/A 00:27:58.008 Firmware Update Granularity: No Information Provided 00:27:58.008 Per-Namespace SMART Log: No 00:27:58.008 Asymmetric Namespace Access Log Page: Not Supported 00:27:58.008 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:58.008 Command Effects Log Page: Supported 00:27:58.008 Get Log Page Extended Data: Supported 00:27:58.008 Telemetry Log Pages: Not Supported 00:27:58.008 Persistent Event Log Pages: Not Supported 00:27:58.008 Supported Log Pages Log Page: May Support 00:27:58.008 Commands Supported & Effects Log Page: Not Supported 00:27:58.008 Feature Identifiers & Effects Log Page:May Support 
00:27:58.008 NVMe-MI Commands & Effects Log Page: May Support 00:27:58.008 Data Area 4 for Telemetry Log: Not Supported 00:27:58.008 Error Log Page Entries Supported: 128 00:27:58.008 Keep Alive: Supported 00:27:58.008 Keep Alive Granularity: 10000 ms 00:27:58.008 00:27:58.008 NVM Command Set Attributes 00:27:58.008 ========================== 00:27:58.008 Submission Queue Entry Size 00:27:58.008 Max: 64 00:27:58.008 Min: 64 00:27:58.008 Completion Queue Entry Size 00:27:58.008 Max: 16 00:27:58.008 Min: 16 00:27:58.008 Number of Namespaces: 32 00:27:58.008 Compare Command: Supported 00:27:58.008 Write Uncorrectable Command: Not Supported 00:27:58.008 Dataset Management Command: Supported 00:27:58.008 Write Zeroes Command: Supported 00:27:58.008 Set Features Save Field: Not Supported 00:27:58.008 Reservations: Supported 00:27:58.008 Timestamp: Not Supported 00:27:58.008 Copy: Supported 00:27:58.008 Volatile Write Cache: Present 00:27:58.008 Atomic Write Unit (Normal): 1 00:27:58.008 Atomic Write Unit (PFail): 1 00:27:58.008 Atomic Compare & Write Unit: 1 00:27:58.008 Fused Compare & Write: Supported 00:27:58.008 Scatter-Gather List 00:27:58.008 SGL Command Set: Supported 00:27:58.008 SGL Keyed: Supported 00:27:58.008 SGL Bit Bucket Descriptor: Not Supported 00:27:58.008 SGL Metadata Pointer: Not Supported 00:27:58.008 Oversized SGL: Not Supported 00:27:58.008 SGL Metadata Address: Not Supported 00:27:58.008 SGL Offset: Supported 00:27:58.008 Transport SGL Data Block: Not Supported 00:27:58.008 Replay Protected Memory Block: Not Supported 00:27:58.008 00:27:58.008 Firmware Slot Information 00:27:58.008 ========================= 00:27:58.008 Active slot: 1 00:27:58.008 Slot 1 Firmware Revision: 24.05.1 00:27:58.008 00:27:58.008 00:27:58.008 Commands Supported and Effects 00:27:58.008 ============================== 00:27:58.008 Admin Commands 00:27:58.008 -------------- 00:27:58.008 Get Log Page (02h): Supported 00:27:58.008 Identify (06h): Supported 00:27:58.008 Abort (08h): Supported 00:27:58.008 Set Features (09h): Supported 00:27:58.008 Get Features (0Ah): Supported 00:27:58.008 Asynchronous Event Request (0Ch): Supported 00:27:58.009 Keep Alive (18h): Supported 00:27:58.009 I/O Commands 00:27:58.009 ------------ 00:27:58.009 Flush (00h): Supported LBA-Change 00:27:58.009 Write (01h): Supported LBA-Change 00:27:58.009 Read (02h): Supported 00:27:58.009 Compare (05h): Supported 00:27:58.009 Write Zeroes (08h): Supported LBA-Change 00:27:58.009 Dataset Management (09h): Supported LBA-Change 00:27:58.009 Copy (19h): Supported LBA-Change 00:27:58.009 Unknown (79h): Supported LBA-Change 00:27:58.009 Unknown (7Ah): Supported 00:27:58.009 00:27:58.009 Error Log 00:27:58.009 ========= 00:27:58.009 00:27:58.009 Arbitration 00:27:58.009 =========== 00:27:58.009 Arbitration Burst: 1 00:27:58.009 00:27:58.009 Power Management 00:27:58.009 ================ 00:27:58.009 Number of Power States: 1 00:27:58.009 Current Power State: Power State #0 00:27:58.009 Power State #0: 00:27:58.009 Max Power: 0.00 W 00:27:58.009 Non-Operational State: Operational 00:27:58.009 Entry Latency: Not Reported 00:27:58.009 Exit Latency: Not Reported 00:27:58.009 Relative Read Throughput: 0 00:27:58.009 Relative Read Latency: 0 00:27:58.009 Relative Write Throughput: 0 00:27:58.009 Relative Write Latency: 0 00:27:58.009 Idle Power: Not Reported 00:27:58.009 Active Power: Not Reported 00:27:58.009 Non-Operational Permissive Mode: Not Supported 00:27:58.009 00:27:58.009 Health Information 00:27:58.009 ================== 
00:27:58.009 Critical Warnings: 00:27:58.009 Available Spare Space: OK 00:27:58.009 Temperature: OK 00:27:58.009 Device Reliability: OK 00:27:58.009 Read Only: No 00:27:58.009 Volatile Memory Backup: OK 00:27:58.009 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:58.009 Temperature Threshold: [2024-07-20 18:03:32.655963] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.655976] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf30120) 00:27:58.009 [2024-07-20 18:03:32.655987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.009 [2024-07-20 18:03:32.656010] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89b90, cid 7, qid 0 00:27:58.009 [2024-07-20 18:03:32.656246] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.009 [2024-07-20 18:03:32.656262] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.009 [2024-07-20 18:03:32.656269] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.656275] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89b90) on tqpair=0xf30120 00:27:58.009 [2024-07-20 18:03:32.656314] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:58.009 [2024-07-20 18:03:32.656336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.009 [2024-07-20 18:03:32.656347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.009 [2024-07-20 18:03:32.656358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.009 [2024-07-20 18:03:32.656371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.009 [2024-07-20 18:03:32.656385] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.656393] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.656415] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf30120) 00:27:58.009 [2024-07-20 18:03:32.656426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.009 [2024-07-20 18:03:32.656448] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89610, cid 3, qid 0 00:27:58.009 [2024-07-20 18:03:32.656737] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.009 [2024-07-20 18:03:32.656749] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.009 [2024-07-20 18:03:32.656756] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.656763] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89610) on tqpair=0xf30120 00:27:58.009 [2024-07-20 18:03:32.656774] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.656782] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.656789] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf30120) 00:27:58.009 [2024-07-20 18:03:32.656812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.009 [2024-07-20 18:03:32.656839] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89610, cid 3, qid 0 00:27:58.009 [2024-07-20 18:03:32.657076] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.009 [2024-07-20 18:03:32.657091] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.009 [2024-07-20 18:03:32.657098] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.657105] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89610) on tqpair=0xf30120 00:27:58.009 [2024-07-20 18:03:32.657113] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:58.009 [2024-07-20 18:03:32.657121] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:58.009 [2024-07-20 18:03:32.657138] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.657147] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.657154] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf30120) 00:27:58.009 [2024-07-20 18:03:32.657165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.009 [2024-07-20 18:03:32.657186] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89610, cid 3, qid 0 00:27:58.009 [2024-07-20 18:03:32.657418] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.009 [2024-07-20 18:03:32.657433] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.009 [2024-07-20 18:03:32.657441] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.657448] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89610) on tqpair=0xf30120 00:27:58.009 [2024-07-20 18:03:32.657464] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.657474] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.657481] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf30120) 00:27:58.009 [2024-07-20 18:03:32.657491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.009 [2024-07-20 18:03:32.657512] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89610, cid 3, qid 0 00:27:58.009 [2024-07-20 18:03:32.657750] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.009 [2024-07-20 18:03:32.657769] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.009 [2024-07-20 18:03:32.657777] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.657784] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89610) on tqpair=0xf30120 00:27:58.009 [2024-07-20 18:03:32.657808] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.657818] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.657825] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf30120) 00:27:58.009 [2024-07-20 18:03:32.657836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.009 [2024-07-20 18:03:32.657857] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89610, cid 3, qid 0 00:27:58.009 [2024-07-20 18:03:32.658095] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.009 [2024-07-20 18:03:32.658107] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.009 [2024-07-20 18:03:32.658114] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.658121] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89610) on tqpair=0xf30120 00:27:58.009 [2024-07-20 18:03:32.658137] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.658146] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.009 [2024-07-20 18:03:32.658153] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf30120) 00:27:58.009 [2024-07-20 18:03:32.658163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.009 [2024-07-20 18:03:32.658183] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89610, cid 3, qid 0 00:27:58.009 [2024-07-20 18:03:32.658416] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.009 [2024-07-20 18:03:32.658428] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.010 [2024-07-20 18:03:32.658435] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.010 [2024-07-20 18:03:32.658442] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89610) on tqpair=0xf30120 00:27:58.010 [2024-07-20 18:03:32.658457] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.010 [2024-07-20 18:03:32.658467] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.010 [2024-07-20 18:03:32.658473] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf30120) 00:27:58.010 [2024-07-20 18:03:32.658484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.010 [2024-07-20 18:03:32.658504] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89610, cid 3, qid 0 00:27:58.010 [2024-07-20 18:03:32.658732] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.010 [2024-07-20 18:03:32.658747] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.010 [2024-07-20 18:03:32.658754] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.010 [2024-07-20 18:03:32.658761] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89610) on tqpair=0xf30120 00:27:58.010 [2024-07-20 18:03:32.658778] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.010 [2024-07-20 18:03:32.658787] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.010 [2024-07-20 18:03:32.662804] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf30120) 00:27:58.010 
[2024-07-20 18:03:32.662819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.010 [2024-07-20 18:03:32.662842] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89610, cid 3, qid 0 00:27:58.010 [2024-07-20 18:03:32.663080] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.010 [2024-07-20 18:03:32.663093] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.010 [2024-07-20 18:03:32.663104] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.010 [2024-07-20 18:03:32.663112] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf89610) on tqpair=0xf30120 00:27:58.010 [2024-07-20 18:03:32.663126] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:27:58.010 0 Kelvin (-273 Celsius) 00:27:58.010 Available Spare: 0% 00:27:58.010 Available Spare Threshold: 0% 00:27:58.010 Life Percentage Used: 0% 00:27:58.010 Data Units Read: 0 00:27:58.010 Data Units Written: 0 00:27:58.010 Host Read Commands: 0 00:27:58.010 Host Write Commands: 0 00:27:58.010 Controller Busy Time: 0 minutes 00:27:58.010 Power Cycles: 0 00:27:58.010 Power On Hours: 0 hours 00:27:58.010 Unsafe Shutdowns: 0 00:27:58.010 Unrecoverable Media Errors: 0 00:27:58.010 Lifetime Error Log Entries: 0 00:27:58.010 Warning Temperature Time: 0 minutes 00:27:58.010 Critical Temperature Time: 0 minutes 00:27:58.010 00:27:58.010 Number of Queues 00:27:58.010 ================ 00:27:58.010 Number of I/O Submission Queues: 127 00:27:58.010 Number of I/O Completion Queues: 127 00:27:58.010 00:27:58.010 Active Namespaces 00:27:58.010 ================= 00:27:58.010 Namespace ID:1 00:27:58.010 Error Recovery Timeout: Unlimited 00:27:58.010 Command Set Identifier: NVM (00h) 00:27:58.010 Deallocate: Supported 00:27:58.010 Deallocated/Unwritten Error: Not Supported 00:27:58.010 Deallocated Read Value: Unknown 00:27:58.010 Deallocate in Write Zeroes: Not Supported 00:27:58.010 Deallocated Guard Field: 0xFFFF 00:27:58.010 Flush: Supported 00:27:58.010 Reservation: Supported 00:27:58.010 Namespace Sharing Capabilities: Multiple Controllers 00:27:58.010 Size (in LBAs): 131072 (0GiB) 00:27:58.010 Capacity (in LBAs): 131072 (0GiB) 00:27:58.010 Utilization (in LBAs): 131072 (0GiB) 00:27:58.010 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:58.010 EUI64: ABCDEF0123456789 00:27:58.010 UUID: 50c2ce68-91f2-4bde-87ca-0720e646b4d1 00:27:58.010 Thin Provisioning: Not Supported 00:27:58.010 Per-NS Atomic Units: Yes 00:27:58.010 Atomic Boundary Size (Normal): 0 00:27:58.010 Atomic Boundary Size (PFail): 0 00:27:58.010 Atomic Boundary Offset: 0 00:27:58.010 Maximum Single Source Range Length: 65535 00:27:58.010 Maximum Copy Length: 65535 00:27:58.010 Maximum Source Range Count: 1 00:27:58.010 NGUID/EUI64 Never Reused: No 00:27:58.010 Namespace Write Protected: No 00:27:58.010 Number of LBA Formats: 1 00:27:58.010 Current LBA Format: LBA Format #00 00:27:58.010 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:58.010 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:58.010 rmmod nvme_tcp 00:27:58.010 rmmod nvme_fabrics 00:27:58.010 rmmod nvme_keyring 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1041389 ']' 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1041389 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 1041389 ']' 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 1041389 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1041389 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1041389' 00:27:58.010 killing process with pid 1041389 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 1041389 00:27:58.010 18:03:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 1041389 00:27:58.269 18:03:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:58.269 18:03:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:58.269 18:03:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:58.269 18:03:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:58.269 18:03:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:58.269 18:03:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.269 18:03:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.269 18:03:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.797 18:03:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:00.797 00:28:00.797 real 0m5.376s 00:28:00.797 user 0m4.238s 00:28:00.797 sys 0m1.829s 00:28:00.797 18:03:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:00.797 18:03:35 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:00.797 ************************************ 00:28:00.797 END TEST nvmf_identify 00:28:00.797 ************************************ 00:28:00.797 18:03:35 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:00.797 18:03:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:00.797 18:03:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:00.797 18:03:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:00.797 ************************************ 00:28:00.797 START TEST nvmf_perf 00:28:00.797 ************************************ 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:00.797 * Looking for test storage... 00:28:00.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:00.797 18:03:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:02.696 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:02.697 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:02.697 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:02.697 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:02.697 18:03:37 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:02.697 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:02.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:02.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:28:02.697 00:28:02.697 --- 10.0.0.2 ping statistics --- 00:28:02.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.697 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:02.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:02.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:28:02.697 00:28:02.697 --- 10.0.0.1 ping statistics --- 00:28:02.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.697 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:02.697 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:02.955 18:03:37 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:02.955 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:02.955 18:03:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:02.955 18:03:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:02.955 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1043467 00:28:02.955 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:02.955 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1043467 00:28:02.955 18:03:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 1043467 ']' 00:28:02.955 18:03:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.955 18:03:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:02.955 18:03:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.955 18:03:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:02.955 18:03:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:02.955 [2024-07-20 18:03:37.546898] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:28:02.955 [2024-07-20 18:03:37.546976] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.955 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.955 [2024-07-20 18:03:37.615309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:02.955 [2024-07-20 18:03:37.711474] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:02.955 [2024-07-20 18:03:37.711535] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:02.955 [2024-07-20 18:03:37.711560] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:02.955 [2024-07-20 18:03:37.711574] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:02.955 [2024-07-20 18:03:37.711586] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:02.955 [2024-07-20 18:03:37.714817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.955 [2024-07-20 18:03:37.714870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:02.955 [2024-07-20 18:03:37.714967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:02.955 [2024-07-20 18:03:37.714971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.213 18:03:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:03.213 18:03:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:28:03.213 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:03.213 18:03:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:03.213 18:03:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:03.213 18:03:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.213 18:03:37 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:03.213 18:03:37 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:06.507 18:03:40 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:06.507 18:03:40 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:06.507 18:03:41 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:06.507 18:03:41 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:06.766 18:03:41 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:06.766 18:03:41 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:06.766 18:03:41 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:06.766 18:03:41 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:06.766 18:03:41 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:07.023 [2024-07-20 18:03:41.740495] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:28:07.024 18:03:41 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:07.281 18:03:41 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:07.281 18:03:41 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:07.539 18:03:42 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:07.539 18:03:42 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:07.799 18:03:42 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:08.057 [2024-07-20 18:03:42.716185] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.057 18:03:42 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:08.315 18:03:42 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:08.315 18:03:42 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:08.315 18:03:42 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:08.315 18:03:42 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:09.684 Initializing NVMe Controllers 00:28:09.684 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:09.684 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:09.684 Initialization complete. Launching workers. 00:28:09.684 ======================================================== 00:28:09.684 Latency(us) 00:28:09.684 Device Information : IOPS MiB/s Average min max 00:28:09.684 PCIE (0000:88:00.0) NSID 1 from core 0: 84337.60 329.44 378.93 33.41 5245.59 00:28:09.684 ======================================================== 00:28:09.684 Total : 84337.60 329.44 378.93 33.41 5245.59 00:28:09.684 00:28:09.684 18:03:44 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:09.684 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.054 Initializing NVMe Controllers 00:28:11.054 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:11.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:11.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:11.054 Initialization complete. Launching workers. 
00:28:11.054 ======================================================== 00:28:11.054 Latency(us) 00:28:11.054 Device Information : IOPS MiB/s Average min max 00:28:11.054 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.00 0.32 12654.02 400.60 46303.71 00:28:11.054 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19696.07 7954.15 47959.35 00:28:11.054 ======================================================== 00:28:11.054 Total : 133.00 0.52 15354.36 400.60 47959.35 00:28:11.054 00:28:11.054 18:03:45 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:11.054 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.423 Initializing NVMe Controllers 00:28:12.423 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:12.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:12.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:12.423 Initialization complete. Launching workers. 00:28:12.423 ======================================================== 00:28:12.423 Latency(us) 00:28:12.423 Device Information : IOPS MiB/s Average min max 00:28:12.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7372.99 28.80 4359.26 795.80 8402.33 00:28:12.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3855.00 15.06 8339.85 6871.32 16483.04 00:28:12.423 ======================================================== 00:28:12.423 Total : 11227.99 43.86 5725.94 795.80 16483.04 00:28:12.423 00:28:12.423 18:03:46 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:12.423 18:03:47 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:12.423 18:03:47 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:12.423 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.992 Initializing NVMe Controllers 00:28:14.992 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:14.992 Controller IO queue size 128, less than required. 00:28:14.992 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:14.992 Controller IO queue size 128, less than required. 00:28:14.992 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:14.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:14.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:14.992 Initialization complete. Launching workers. 
00:28:14.992 ======================================================== 00:28:14.992 Latency(us) 00:28:14.992 Device Information : IOPS MiB/s Average min max 00:28:14.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 645.49 161.37 207440.34 111096.66 348715.81 00:28:14.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 563.49 140.87 234902.75 91952.98 350776.58 00:28:14.992 ======================================================== 00:28:14.992 Total : 1208.99 302.25 220240.23 91952.98 350776.58 00:28:14.992 00:28:14.992 18:03:49 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:14.992 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.992 No valid NVMe controllers or AIO or URING devices found 00:28:14.992 Initializing NVMe Controllers 00:28:14.992 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:14.992 Controller IO queue size 128, less than required. 00:28:14.992 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:14.992 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:14.992 Controller IO queue size 128, less than required. 00:28:14.992 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:14.992 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:14.992 WARNING: Some requested NVMe devices were skipped 00:28:14.992 18:03:49 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:14.992 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.519 Initializing NVMe Controllers 00:28:17.519 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:17.519 Controller IO queue size 128, less than required. 00:28:17.519 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:17.519 Controller IO queue size 128, less than required. 00:28:17.519 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:17.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:17.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:17.519 Initialization complete. Launching workers. 
00:28:17.519 00:28:17.519 ==================== 00:28:17.519 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:17.519 TCP transport: 00:28:17.519 polls: 44821 00:28:17.519 idle_polls: 16261 00:28:17.519 sock_completions: 28560 00:28:17.519 nvme_completions: 2703 00:28:17.519 submitted_requests: 4076 00:28:17.519 queued_requests: 1 00:28:17.519 00:28:17.519 ==================== 00:28:17.519 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:17.519 TCP transport: 00:28:17.519 polls: 42426 00:28:17.519 idle_polls: 13371 00:28:17.519 sock_completions: 29055 00:28:17.519 nvme_completions: 2721 00:28:17.519 submitted_requests: 4094 00:28:17.519 queued_requests: 1 00:28:17.519 ======================================================== 00:28:17.519 Latency(us) 00:28:17.519 Device Information : IOPS MiB/s Average min max 00:28:17.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 675.02 168.75 199508.52 97379.59 293961.49 00:28:17.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 679.51 169.88 195313.03 79581.53 279378.67 00:28:17.519 ======================================================== 00:28:17.519 Total : 1354.53 338.63 197403.81 79581.53 293961.49 00:28:17.519 00:28:17.775 18:03:52 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:17.775 18:03:52 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:18.032 18:03:52 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:18.032 18:03:52 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:18.032 18:03:52 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:21.304 18:03:55 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=94d3ecd0-0fc1-48e0-9081-b8465f86c76f 00:28:21.304 18:03:55 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 94d3ecd0-0fc1-48e0-9081-b8465f86c76f 00:28:21.304 18:03:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=94d3ecd0-0fc1-48e0-9081-b8465f86c76f 00:28:21.304 18:03:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:21.304 18:03:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:21.304 18:03:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:21.304 18:03:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:21.304 18:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:21.304 { 00:28:21.304 "uuid": "94d3ecd0-0fc1-48e0-9081-b8465f86c76f", 00:28:21.304 "name": "lvs_0", 00:28:21.304 "base_bdev": "Nvme0n1", 00:28:21.304 "total_data_clusters": 238234, 00:28:21.304 "free_clusters": 238234, 00:28:21.304 "block_size": 512, 00:28:21.304 "cluster_size": 4194304 00:28:21.304 } 00:28:21.304 ]' 00:28:21.304 18:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="94d3ecd0-0fc1-48e0-9081-b8465f86c76f") .free_clusters' 00:28:21.561 18:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:28:21.561 18:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="94d3ecd0-0fc1-48e0-9081-b8465f86c76f") .cluster_size' 00:28:21.561 18:03:56 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:21.561 18:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:28:21.561 18:03:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:28:21.561 952936 00:28:21.561 18:03:56 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:21.561 18:03:56 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:21.561 18:03:56 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 94d3ecd0-0fc1-48e0-9081-b8465f86c76f lbd_0 20480 00:28:21.818 18:03:56 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=ace4c63c-290e-4d3c-8312-15a62c48a882 00:28:21.818 18:03:56 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore ace4c63c-290e-4d3c-8312-15a62c48a882 lvs_n_0 00:28:22.747 18:03:57 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=1d9b5a6a-b41d-4c3b-a6f7-4a5ebfad3e93 00:28:22.747 18:03:57 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 1d9b5a6a-b41d-4c3b-a6f7-4a5ebfad3e93 00:28:22.747 18:03:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=1d9b5a6a-b41d-4c3b-a6f7-4a5ebfad3e93 00:28:22.747 18:03:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:22.747 18:03:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:22.747 18:03:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:22.747 18:03:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:23.004 18:03:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:23.004 { 00:28:23.004 "uuid": "94d3ecd0-0fc1-48e0-9081-b8465f86c76f", 00:28:23.004 "name": "lvs_0", 00:28:23.004 "base_bdev": "Nvme0n1", 00:28:23.004 "total_data_clusters": 238234, 00:28:23.004 "free_clusters": 233114, 00:28:23.004 "block_size": 512, 00:28:23.004 "cluster_size": 4194304 00:28:23.004 }, 00:28:23.004 { 00:28:23.004 "uuid": "1d9b5a6a-b41d-4c3b-a6f7-4a5ebfad3e93", 00:28:23.004 "name": "lvs_n_0", 00:28:23.004 "base_bdev": "ace4c63c-290e-4d3c-8312-15a62c48a882", 00:28:23.004 "total_data_clusters": 5114, 00:28:23.004 "free_clusters": 5114, 00:28:23.004 "block_size": 512, 00:28:23.004 "cluster_size": 4194304 00:28:23.004 } 00:28:23.004 ]' 00:28:23.004 18:03:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="1d9b5a6a-b41d-4c3b-a6f7-4a5ebfad3e93") .free_clusters' 00:28:23.004 18:03:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:28:23.004 18:03:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="1d9b5a6a-b41d-4c3b-a6f7-4a5ebfad3e93") .cluster_size' 00:28:23.004 18:03:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:23.004 18:03:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:28:23.004 18:03:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:28:23.004 20456 00:28:23.004 18:03:57 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:23.004 18:03:57 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1d9b5a6a-b41d-4c3b-a6f7-4a5ebfad3e93 lbd_nest_0 20456 00:28:23.261 18:03:57 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=2f9549b2-97db-4814-8741-d5d240cd864c 00:28:23.261 18:03:57 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:23.519 18:03:58 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:23.519 18:03:58 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 2f9549b2-97db-4814-8741-d5d240cd864c 00:28:23.776 18:03:58 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.034 18:03:58 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:24.034 18:03:58 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:24.034 18:03:58 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:24.034 18:03:58 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:24.034 18:03:58 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:24.034 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.260 Initializing NVMe Controllers 00:28:36.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:36.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:36.261 Initialization complete. Launching workers. 00:28:36.261 ======================================================== 00:28:36.261 Latency(us) 00:28:36.261 Device Information : IOPS MiB/s Average min max 00:28:36.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.20 0.02 22697.30 294.60 46168.41 00:28:36.261 ======================================================== 00:28:36.261 Total : 44.20 0.02 22697.30 294.60 46168.41 00:28:36.261 00:28:36.261 18:04:09 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:36.261 18:04:09 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:36.261 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.218 Initializing NVMe Controllers 00:28:46.219 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:46.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:46.219 Initialization complete. Launching workers. 
00:28:46.219 ======================================================== 00:28:46.219 Latency(us) 00:28:46.219 Device Information : IOPS MiB/s Average min max 00:28:46.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 83.80 10.47 11940.39 6050.85 46899.24 00:28:46.219 ======================================================== 00:28:46.219 Total : 83.80 10.47 11940.39 6050.85 46899.24 00:28:46.219 00:28:46.219 18:04:19 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:46.219 18:04:19 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:46.219 18:04:19 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:46.219 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.177 Initializing NVMe Controllers 00:28:56.177 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:56.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:56.177 Initialization complete. Launching workers. 00:28:56.177 ======================================================== 00:28:56.177 Latency(us) 00:28:56.177 Device Information : IOPS MiB/s Average min max 00:28:56.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6434.60 3.14 4983.31 359.08 47845.08 00:28:56.177 ======================================================== 00:28:56.177 Total : 6434.60 3.14 4983.31 359.08 47845.08 00:28:56.177 00:28:56.177 18:04:29 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:56.177 18:04:29 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:56.177 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.138 Initializing NVMe Controllers 00:29:06.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:06.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:06.138 Initialization complete. Launching workers. 00:29:06.138 ======================================================== 00:29:06.138 Latency(us) 00:29:06.138 Device Information : IOPS MiB/s Average min max 00:29:06.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1222.60 152.82 26187.10 1880.46 53887.39 00:29:06.138 ======================================================== 00:29:06.138 Total : 1222.60 152.82 26187.10 1880.46 53887.39 00:29:06.138 00:29:06.138 18:04:39 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:06.138 18:04:39 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:06.138 18:04:39 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:06.138 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.136 Initializing NVMe Controllers 00:29:16.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:16.136 Controller IO queue size 128, less than required. 00:29:16.136 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:16.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:16.136 Initialization complete. Launching workers. 00:29:16.136 ======================================================== 00:29:16.136 Latency(us) 00:29:16.136 Device Information : IOPS MiB/s Average min max 00:29:16.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11689.90 5.71 10951.71 1749.83 24244.84 00:29:16.136 ======================================================== 00:29:16.136 Total : 11689.90 5.71 10951.71 1749.83 24244.84 00:29:16.136 00:29:16.136 18:04:50 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:16.136 18:04:50 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:16.136 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.096 Initializing NVMe Controllers 00:29:26.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:26.096 Controller IO queue size 128, less than required. 00:29:26.096 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:26.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:26.096 Initialization complete. Launching workers. 00:29:26.096 ======================================================== 00:29:26.096 Latency(us) 00:29:26.096 Device Information : IOPS MiB/s Average min max 00:29:26.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1154.60 144.32 111506.50 18527.40 229154.05 00:29:26.096 ======================================================== 00:29:26.096 Total : 1154.60 144.32 111506.50 18527.40 229154.05 00:29:26.096 00:29:26.096 18:05:00 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:26.353 18:05:00 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2f9549b2-97db-4814-8741-d5d240cd864c 00:29:26.917 18:05:01 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:27.174 18:05:01 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ace4c63c-290e-4d3c-8312-15a62c48a882 00:29:27.432 18:05:02 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:27.689 18:05:02 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:27.689 18:05:02 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:27.689 18:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:27.689 18:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:27.689 18:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:27.689 18:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:27.689 18:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:27.689 18:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:27.689 rmmod nvme_tcp 00:29:27.689 rmmod nvme_fabrics 00:29:27.689 rmmod nvme_keyring 00:29:27.689 18:05:02 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:27.689 18:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:27.689 18:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:27.689 18:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1043467 ']' 00:29:27.689 18:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1043467 00:29:27.689 18:05:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 1043467 ']' 00:29:27.689 18:05:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 1043467 00:29:27.689 18:05:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:29:27.689 18:05:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:27.689 18:05:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1043467 00:29:27.946 18:05:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:27.946 18:05:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:27.946 18:05:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1043467' 00:29:27.946 killing process with pid 1043467 00:29:27.946 18:05:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 1043467 00:29:27.946 18:05:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 1043467 00:29:29.316 18:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:29.316 18:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:29.316 18:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:29.316 18:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:29.316 18:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:29.316 18:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.316 18:05:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:29.316 18:05:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.844 18:05:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:31.844 00:29:31.844 real 1m31.017s 00:29:31.844 user 5m23.686s 00:29:31.844 sys 0m17.051s 00:29:31.844 18:05:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:31.844 18:05:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:31.844 ************************************ 00:29:31.844 END TEST nvmf_perf 00:29:31.844 ************************************ 00:29:31.844 18:05:06 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:31.844 18:05:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:31.844 18:05:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:31.844 18:05:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:31.844 ************************************ 00:29:31.844 START TEST nvmf_fio_host 00:29:31.844 ************************************ 00:29:31.844 18:05:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:31.844 * Looking for test storage... 
00:29:31.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:31.844 18:05:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:31.844 18:05:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.844 18:05:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.844 18:05:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:31.845 18:05:06 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:33.748 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
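[editor's note] For readers following the trace: the discovery phase above matches known Intel/Mellanox PCI device IDs and then resolves each matching PCI function to its kernel network interface through sysfs (the pci_net_devs glob). A minimal standalone sketch of that lookup — the PCI address 0000:0a:00.0 is just the one seen in this run, and the script name is hypothetical:

#!/usr/bin/env bash
# pci2netdev.sh (sketch): resolve a PCI network function to its net device,
# walking the same sysfs path nvmf/common.sh uses for pci_net_devs.
shopt -s nullglob
pci=${1:-0000:0a:00.0}                      # PCI address from this run; replace as needed
devs=(/sys/bus/pci/devices/"$pci"/net/*)    # one entry per interface exposed by the function
(( ${#devs[@]} )) || { echo "no net device under $pci" >&2; exit 1; }
for dev in "${devs[@]}"; do
    echo "$pci -> $(basename "$dev") ($(cat "$dev"/operstate))"
done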
00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:33.748 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:33.748 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:33.749 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:33.749 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
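[editor's note] The nvmf_tcp_init step traced next isolates one port of the NIC pair in a network namespace, so the SPDK target and the initiator exchange real NVMe/TCP traffic between the two ports. A rough standalone equivalent, assuming the interface names (cvl_0_0 / cvl_0_1) and the 10.0.0.0/24 addressing seen in this run:

#!/usr/bin/env bash
# Sketch of the target/initiator split performed by nvmf_tcp_init in nvmf/common.sh.
set -e
TGT_IF=cvl_0_0            # moved into a namespace; serves as the target-side port
INI_IF=cvl_0_1            # stays in the root namespace; initiator-side port
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                          # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"      # target address

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP in
ping -c 1 10.0.0.2                                             # reachability check, as in the trace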
00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:33.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:29:33.749 00:29:33.749 --- 10.0.0.2 ping statistics --- 00:29:33.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.749 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:33.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:29:33.749 00:29:33.749 --- 10.0.0.1 ping statistics --- 00:29:33.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.749 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1055434 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1055434 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 1055434 ']' 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:33.749 18:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.749 [2024-07-20 18:05:08.541831] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:33.749 [2024-07-20 18:05:08.541931] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.008 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.008 [2024-07-20 18:05:08.608998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:34.008 [2024-07-20 18:05:08.697035] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
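[editor's note] With the namespace in place, host/fio.sh starts nvmf_tgt inside it and then drives the target over rpc.py; the configuration performed by the following trace entries reduces to a handful of RPCs. A condensed sketch — subsystem name, serial, and sizes are copied from this run, while $SPDK is a placeholder for the checked-out tree and the sleep stands in for the harness's waitforlisten helper:

#!/usr/bin/env bash
# Sketch of the target bring-up driven by host/fio.sh in this run.
SPDK=/path/to/spdk                      # placeholder for the SPDK checkout
RPC="$SPDK/scripts/rpc.py"
NS=cvl_0_0_ns_spdk

# Launch the target inside the namespace (shm id, trace mask and core mask as in the trace).
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
sleep 2                                 # the real script uses waitforlisten on /var/tmp/spdk.sock

"$RPC" nvmf_create_transport -t tcp -o -u 8192                       # TCP transport, 8 KiB io-unit
"$RPC" bdev_malloc_create 64 512 -b Malloc1                          # 64 MiB RAM-backed bdev
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420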
00:29:34.008 [2024-07-20 18:05:08.697090] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.008 [2024-07-20 18:05:08.697111] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.008 [2024-07-20 18:05:08.697130] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.008 [2024-07-20 18:05:08.697147] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:34.008 [2024-07-20 18:05:08.697220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.008 [2024-07-20 18:05:08.697292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.008 [2024-07-20 18:05:08.697352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.008 [2024-07-20 18:05:08.697357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.265 18:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:34.265 18:05:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:29:34.265 18:05:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:34.265 [2024-07-20 18:05:09.058284] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.523 18:05:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:34.523 18:05:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:34.523 18:05:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.523 18:05:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:34.781 Malloc1 00:29:34.781 18:05:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:35.039 18:05:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:35.297 18:05:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:35.297 [2024-07-20 18:05:10.084317] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.554 18:05:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:35.812 18:05:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:35.812 18:05:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:35.812 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:29:35.812 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:35.812 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:35.812 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:35.812 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:35.812 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:35.812 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:35.812 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.812 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:35.812 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:35.812 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:35.812 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:35.813 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:35.813 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.813 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:35.813 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:35.813 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:35.813 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:35.813 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:35.813 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:35.813 18:05:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:35.813 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:35.813 fio-3.35 00:29:35.813 Starting 1 thread 00:29:35.813 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.364 00:29:38.364 test: (groupid=0, jobs=1): err= 0: pid=1055818: Sat Jul 20 18:05:13 2024 00:29:38.364 read: IOPS=8763, BW=34.2MiB/s (35.9MB/s)(68.6MiB/2005msec) 00:29:38.364 slat (usec): min=2, max=108, avg= 2.58, stdev= 1.38 00:29:38.364 clat (usec): min=4467, max=14942, avg=8472.65, stdev=1323.23 00:29:38.364 lat (usec): min=4470, max=14945, avg=8475.24, stdev=1323.21 00:29:38.364 clat percentiles (usec): 00:29:38.364 | 1.00th=[ 5866], 5.00th=[ 6718], 10.00th=[ 7111], 20.00th=[ 7439], 00:29:38.364 | 30.00th=[ 7701], 40.00th=[ 7963], 50.00th=[ 8225], 60.00th=[ 8455], 00:29:38.364 | 70.00th=[ 8848], 80.00th=[ 9503], 90.00th=[10421], 95.00th=[11076], 00:29:38.364 | 99.00th=[12518], 99.50th=[12911], 99.90th=[13960], 99.95th=[14746], 00:29:38.364 | 99.99th=[14877] 00:29:38.364 bw ( KiB/s): min=34320, 
max=35712, per=99.87%, avg=35008.00, stdev=645.97, samples=4 00:29:38.364 iops : min= 8580, max= 8928, avg=8752.00, stdev=161.49, samples=4 00:29:38.364 write: IOPS=8770, BW=34.3MiB/s (35.9MB/s)(68.7MiB/2005msec); 0 zone resets 00:29:38.364 slat (nsec): min=2169, max=90041, avg=2686.19, stdev=1153.70 00:29:38.364 clat (usec): min=1116, max=10505, avg=6041.19, stdev=858.92 00:29:38.364 lat (usec): min=1122, max=10507, avg=6043.88, stdev=858.92 00:29:38.364 clat percentiles (usec): 00:29:38.364 | 1.00th=[ 3884], 5.00th=[ 4424], 10.00th=[ 4817], 20.00th=[ 5407], 00:29:38.364 | 30.00th=[ 5735], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6325], 00:29:38.364 | 70.00th=[ 6521], 80.00th=[ 6718], 90.00th=[ 6980], 95.00th=[ 7308], 00:29:38.364 | 99.00th=[ 7898], 99.50th=[ 8291], 99.90th=[ 9241], 99.95th=[ 9503], 00:29:38.364 | 99.99th=[10421] 00:29:38.364 bw ( KiB/s): min=34056, max=35720, per=99.91%, avg=35048.00, stdev=705.09, samples=4 00:29:38.364 iops : min= 8514, max= 8930, avg=8762.00, stdev=176.27, samples=4 00:29:38.364 lat (msec) : 2=0.01%, 4=0.62%, 10=92.32%, 20=7.06% 00:29:38.364 cpu : usr=59.28%, sys=32.39%, ctx=42, majf=0, minf=36 00:29:38.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:38.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:38.364 issued rwts: total=17571,17584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:38.364 00:29:38.364 Run status group 0 (all jobs): 00:29:38.364 READ: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=68.6MiB (72.0MB), run=2005-2005msec 00:29:38.364 WRITE: bw=34.3MiB/s (35.9MB/s), 34.3MiB/s-34.3MiB/s (35.9MB/s-35.9MB/s), io=68.7MiB (72.0MB), run=2005-2005msec 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:38.364 18:05:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:38.621 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:38.621 fio-3.35 00:29:38.621 Starting 1 thread 00:29:38.621 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.148 00:29:41.148 test: (groupid=0, jobs=1): err= 0: pid=1056241: Sat Jul 20 18:05:15 2024 00:29:41.148 read: IOPS=6504, BW=102MiB/s (107MB/s)(204MiB/2007msec) 00:29:41.148 slat (nsec): min=2894, max=93652, avg=3898.20, stdev=2095.07 00:29:41.148 clat (usec): min=3967, max=34358, avg=12288.75, stdev=2777.43 00:29:41.148 lat (usec): min=3970, max=34362, avg=12292.64, stdev=2777.57 00:29:41.148 clat percentiles (usec): 00:29:41.148 | 1.00th=[ 6390], 5.00th=[ 8160], 10.00th=[ 9110], 20.00th=[10028], 00:29:41.148 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12256], 60.00th=[12780], 00:29:41.148 | 70.00th=[13435], 80.00th=[14222], 90.00th=[15533], 95.00th=[16712], 00:29:41.148 | 99.00th=[22414], 99.50th=[24511], 99.90th=[25035], 99.95th=[25560], 00:29:41.148 | 99.99th=[28705] 00:29:41.148 bw ( KiB/s): min=39904, max=61248, per=49.32%, avg=51328.00, stdev=9754.74, samples=4 00:29:41.148 iops : min= 2494, max= 3828, avg=3208.00, stdev=609.67, samples=4 00:29:41.148 write: IOPS=3705, BW=57.9MiB/s (60.7MB/s)(105MiB/1817msec); 0 zone resets 00:29:41.148 slat (usec): min=30, max=193, avg=34.85, stdev= 6.46 00:29:41.148 clat (usec): min=5114, max=25811, avg=13216.52, stdev=2502.94 00:29:41.148 lat (usec): min=5145, max=25848, avg=13251.37, stdev=2504.19 00:29:41.148 clat percentiles (usec): 00:29:41.148 | 1.00th=[ 8979], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11338], 00:29:41.148 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12780], 60.00th=[13435], 00:29:41.148 | 70.00th=[14091], 80.00th=[15008], 90.00th=[16188], 95.00th=[17171], 00:29:41.148 | 99.00th=[23462], 99.50th=[24249], 99.90th=[25560], 99.95th=[25560], 00:29:41.148 | 99.99th=[25822] 00:29:41.148 bw ( KiB/s): min=41600, max=63936, per=90.03%, avg=53376.00, stdev=10213.27, samples=4 00:29:41.148 iops : min= 2600, max= 3996, avg=3336.00, stdev=638.33, samples=4 00:29:41.148 lat (msec) : 4=0.01%, 10=14.86%, 20=83.77%, 50=1.37% 00:29:41.148 cpu : usr=76.33%, sys=21.92%, ctx=17, 
majf=0, minf=56 00:29:41.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:29:41.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:41.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:41.148 issued rwts: total=13055,6733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:41.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:41.148 00:29:41.148 Run status group 0 (all jobs): 00:29:41.148 READ: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=204MiB (214MB), run=2007-2007msec 00:29:41.148 WRITE: bw=57.9MiB/s (60.7MB/s), 57.9MiB/s-57.9MiB/s (60.7MB/s-60.7MB/s), io=105MiB (110MB), run=1817-1817msec 00:29:41.148 18:05:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:41.148 18:05:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:41.148 18:05:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:41.148 18:05:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:41.148 18:05:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:41.148 18:05:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:29:41.148 18:05:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:41.148 18:05:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:41.148 18:05:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:41.148 18:05:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:41.148 18:05:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:29:41.148 18:05:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:29:44.421 Nvme0n1 00:29:44.421 18:05:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:47.707 18:05:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=9e65fd90-3059-43c4-9d73-93e8e126a6a3 00:29:47.707 18:05:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 9e65fd90-3059-43c4-9d73-93e8e126a6a3 00:29:47.707 18:05:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=9e65fd90-3059-43c4-9d73-93e8e126a6a3 00:29:47.707 18:05:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:47.707 18:05:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:29:47.707 18:05:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:29:47.707 18:05:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:47.707 18:05:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:47.707 { 00:29:47.707 "uuid": "9e65fd90-3059-43c4-9d73-93e8e126a6a3", 00:29:47.707 "name": "lvs_0", 00:29:47.707 "base_bdev": "Nvme0n1", 00:29:47.707 "total_data_clusters": 930, 00:29:47.707 "free_clusters": 930, 00:29:47.707 
"block_size": 512, 00:29:47.707 "cluster_size": 1073741824 00:29:47.707 } 00:29:47.707 ]' 00:29:47.707 18:05:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="9e65fd90-3059-43c4-9d73-93e8e126a6a3") .free_clusters' 00:29:47.707 18:05:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:29:47.707 18:05:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="9e65fd90-3059-43c4-9d73-93e8e126a6a3") .cluster_size' 00:29:47.707 18:05:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:29:47.707 18:05:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=952320 00:29:47.707 18:05:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:29:47.707 952320 00:29:47.707 18:05:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:47.964 1b87bbb4-c8b5-497f-9bd4-36aadf6594ab 00:29:47.964 18:05:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:48.222 18:05:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:48.479 18:05:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:48.736 18:05:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:48.994 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:48.994 fio-3.35 00:29:48.994 Starting 1 thread 00:29:48.994 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.518 00:29:51.518 test: (groupid=0, jobs=1): err= 0: pid=1057522: Sat Jul 20 18:05:26 2024 00:29:51.518 read: IOPS=5992, BW=23.4MiB/s (24.5MB/s)(47.0MiB/2007msec) 00:29:51.518 slat (usec): min=2, max=172, avg= 2.71, stdev= 2.43 00:29:51.518 clat (usec): min=1747, max=172024, avg=11862.06, stdev=11680.77 00:29:51.518 lat (usec): min=1751, max=172070, avg=11864.77, stdev=11681.12 00:29:51.518 clat percentiles (msec): 00:29:51.518 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:29:51.518 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:29:51.518 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:29:51.518 | 99.00th=[ 14], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:29:51.518 | 99.99th=[ 174] 00:29:51.518 bw ( KiB/s): min=16992, max=26312, per=99.65%, avg=23884.00, stdev=4597.02, samples=4 00:29:51.518 iops : min= 4248, max= 6578, avg=5971.00, stdev=1149.25, samples=4 00:29:51.518 write: IOPS=5972, BW=23.3MiB/s (24.5MB/s)(46.8MiB/2007msec); 0 zone resets 00:29:51.518 slat (usec): min=2, max=129, avg= 2.82, stdev= 1.93 00:29:51.518 clat (usec): min=493, max=170327, avg=9427.27, stdev=10989.58 00:29:51.518 lat (usec): min=496, max=170335, avg=9430.10, stdev=10989.92 00:29:51.518 clat percentiles (msec): 00:29:51.518 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:29:51.518 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:29:51.518 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:29:51.518 | 99.00th=[ 11], 99.50th=[ 15], 99.90th=[ 169], 99.95th=[ 171], 00:29:51.518 | 99.99th=[ 171] 00:29:51.518 bw ( KiB/s): min=18048, max=25920, per=99.99%, avg=23888.00, stdev=3894.15, samples=4 00:29:51.518 iops : min= 4512, max= 6480, avg=5972.00, stdev=973.54, samples=4 00:29:51.518 lat (usec) : 500=0.01%, 750=0.01% 00:29:51.518 lat (msec) : 2=0.03%, 4=0.17%, 10=54.59%, 20=44.67%, 250=0.53% 00:29:51.518 cpu : usr=47.31%, sys=43.02%, ctx=70, majf=0, minf=36 00:29:51.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:51.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:51.518 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:51.518 issued rwts: total=12026,11987,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:51.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:51.518 00:29:51.518 Run status group 0 (all jobs): 00:29:51.518 READ: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=47.0MiB (49.3MB), run=2007-2007msec 00:29:51.518 WRITE: bw=23.3MiB/s (24.5MB/s), 23.3MiB/s-23.3MiB/s (24.5MB/s-24.5MB/s), io=46.8MiB (49.1MB), run=2007-2007msec 00:29:51.518 18:05:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:51.518 18:05:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:52.890 18:05:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=b1cbf28d-a68e-4e71-ab6d-f403a43cd432 00:29:52.890 18:05:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb b1cbf28d-a68e-4e71-ab6d-f403a43cd432 00:29:52.890 18:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=b1cbf28d-a68e-4e71-ab6d-f403a43cd432 00:29:52.890 18:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:52.890 18:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:29:52.890 18:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:29:52.890 18:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:52.890 18:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:52.890 { 00:29:52.890 "uuid": "9e65fd90-3059-43c4-9d73-93e8e126a6a3", 00:29:52.890 "name": "lvs_0", 00:29:52.890 "base_bdev": "Nvme0n1", 00:29:52.890 "total_data_clusters": 930, 00:29:52.890 "free_clusters": 0, 00:29:52.890 "block_size": 512, 00:29:52.890 "cluster_size": 1073741824 00:29:52.890 }, 00:29:52.890 { 00:29:52.890 "uuid": "b1cbf28d-a68e-4e71-ab6d-f403a43cd432", 00:29:52.890 "name": "lvs_n_0", 00:29:52.890 "base_bdev": "1b87bbb4-c8b5-497f-9bd4-36aadf6594ab", 00:29:52.890 "total_data_clusters": 237847, 00:29:52.890 "free_clusters": 237847, 00:29:52.890 "block_size": 512, 00:29:52.890 "cluster_size": 4194304 00:29:52.890 } 00:29:52.890 ]' 00:29:52.890 18:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="b1cbf28d-a68e-4e71-ab6d-f403a43cd432") .free_clusters' 00:29:53.149 18:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:29:53.149 18:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="b1cbf28d-a68e-4e71-ab6d-f403a43cd432") .cluster_size' 00:29:53.149 18:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:29:53.149 18:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:29:53.149 18:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:29:53.149 951388 00:29:53.149 18:05:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:53.715 24c8d05b-5a63-4c70-897d-7ecb8842c008 00:29:53.715 18:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:53.972 18:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:54.231 18:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:54.489 18:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:54.748 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:54.748 fio-3.35 00:29:54.748 Starting 1 thread 00:29:54.748 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.374 00:29:57.375 test: (groupid=0, jobs=1): err= 0: pid=1058253: Sat Jul 20 18:05:31 2024 00:29:57.375 read: IOPS=5676, BW=22.2MiB/s (23.2MB/s)(44.5MiB/2006msec) 00:29:57.375 slat (usec): min=2, max=171, avg= 2.91, stdev= 2.72 00:29:57.375 clat (usec): min=5919, max=18922, avg=13150.92, stdev=1874.61 00:29:57.375 lat (usec): min=5921, max=18924, avg=13153.83, stdev=1874.54 00:29:57.375 clat percentiles (usec): 00:29:57.375 | 1.00th=[ 8717], 5.00th=[10159], 10.00th=[10814], 20.00th=[11600], 00:29:57.375 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13042], 60.00th=[13566], 00:29:57.375 | 70.00th=[14091], 80.00th=[14877], 90.00th=[15664], 95.00th=[16319], 00:29:57.375 | 99.00th=[17433], 99.50th=[17957], 99.90th=[18482], 99.95th=[18744], 00:29:57.375 | 99.99th=[18744] 00:29:57.375 bw ( KiB/s): min=21440, max=23432, per=99.67%, avg=22632.00, stdev=853.69, samples=4 00:29:57.375 iops : min= 5360, max= 5858, avg=5658.00, stdev=213.42, samples=4 00:29:57.375 write: IOPS=5647, BW=22.1MiB/s (23.1MB/s)(44.3MiB/2006msec); 0 zone resets 00:29:57.375 slat (usec): min=2, max=136, avg= 3.00, stdev= 2.22 00:29:57.375 clat (usec): min=2837, max=14583, avg=9277.30, stdev=1391.13 00:29:57.375 lat (usec): min=2845, max=14586, avg=9280.31, stdev=1391.14 00:29:57.375 clat percentiles (usec): 00:29:57.375 | 1.00th=[ 5866], 5.00th=[ 6718], 10.00th=[ 7308], 20.00th=[ 8160], 00:29:57.375 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:29:57.375 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10814], 95.00th=[11338], 00:29:57.375 | 99.00th=[12256], 99.50th=[12649], 99.90th=[13435], 99.95th=[13960], 00:29:57.375 | 99.99th=[14484] 00:29:57.375 bw ( KiB/s): min=22032, max=22840, per=99.80%, avg=22546.00, stdev=354.44, samples=4 00:29:57.375 iops : min= 5508, max= 5710, avg=5636.50, stdev=88.61, samples=4 00:29:57.375 lat (msec) : 4=0.05%, 10=36.30%, 20=63.65% 00:29:57.375 cpu : usr=59.50%, sys=34.81%, ctx=39, majf=0, minf=36 00:29:57.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:57.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:57.375 issued rwts: total=11387,11329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:57.375 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:57.375 00:29:57.375 Run status group 0 (all jobs): 00:29:57.375 READ: bw=22.2MiB/s (23.2MB/s), 22.2MiB/s-22.2MiB/s (23.2MB/s-23.2MB/s), io=44.5MiB (46.6MB), run=2006-2006msec 00:29:57.375 WRITE: bw=22.1MiB/s (23.1MB/s), 22.1MiB/s-22.1MiB/s (23.1MB/s-23.1MB/s), io=44.3MiB (46.4MB), run=2006-2006msec 00:29:57.375 18:05:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:57.375 18:05:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:57.375 18:05:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:01.572 18:05:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 
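[editor's note] One detail worth calling out from the lvol portion of the run above: the 952320 and 951388 values echoed by get_lvs_free_mb are simply free_clusters × cluster_size expressed in MiB (930 × 1 GiB for lvs_0, 237847 × 4 MiB for the nested lvs_n_0). A sketch of the same computation against bdev_lvol_get_lvstores output; the rpc.py path is a placeholder and the UUID is the lvs_0 store from this run:

#!/usr/bin/env bash
# Sketch of get_lvs_free_mb: free MiB = free_clusters * cluster_size / 2^20.
RPC=/path/to/spdk/scripts/rpc.py                 # placeholder path
uuid=9e65fd90-3059-43c4-9d73-93e8e126a6a3        # lvs_0 in this run

lvs_json=$("$RPC" bdev_lvol_get_lvstores)
fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<<"$lvs_json")
cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size"  <<<"$lvs_json")
echo $(( fc * cs / 1024 / 1024 ))                # 930 * 1073741824 / 2^20 = 952320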
00:30:01.572 18:05:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:04.851 18:05:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:04.851 18:05:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:06.757 18:05:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:06.757 18:05:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:06.757 18:05:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:06.757 18:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:06.757 18:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:06.757 18:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:06.758 rmmod nvme_tcp 00:30:06.758 rmmod nvme_fabrics 00:30:06.758 rmmod nvme_keyring 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1055434 ']' 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1055434 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 1055434 ']' 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 1055434 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1055434 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1055434' 00:30:06.758 killing process with pid 1055434 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 1055434 00:30:06.758 18:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 1055434 00:30:07.020 18:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:07.020 18:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:07.020 18:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:07.020 18:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:07.020 18:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:07.020 18:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.020 18:05:41 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:07.020 18:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.918 18:05:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:08.918 00:30:08.918 real 0m37.454s 00:30:08.918 user 2m23.062s 00:30:08.918 sys 0m6.980s 00:30:08.918 18:05:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:08.918 18:05:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.918 ************************************ 00:30:08.918 END TEST nvmf_fio_host 00:30:08.918 ************************************ 00:30:08.918 18:05:43 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:08.918 18:05:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:08.918 18:05:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:08.918 18:05:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:08.918 ************************************ 00:30:08.919 START TEST nvmf_failover 00:30:08.919 ************************************ 00:30:08.919 18:05:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:09.175 * Looking for test storage... 00:30:09.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.175 18:05:43 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:09.176 18:05:43 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:09.176 18:05:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.070 18:05:45 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:11.070 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:11.070 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:11.070 18:05:45 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:11.070 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:11.070 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:11.070 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:11.328 
18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:11.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:11.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:30:11.328 00:30:11.328 --- 10.0.0.2 ping statistics --- 00:30:11.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.328 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:11.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:11.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:30:11.328 00:30:11.328 --- 10.0.0.1 ping statistics --- 00:30:11.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.328 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1061632 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1061632 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1061632 ']' 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:30:11.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:11.328 18:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:11.328 [2024-07-20 18:05:46.022765] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:30:11.328 [2024-07-20 18:05:46.022882] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.328 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.328 [2024-07-20 18:05:46.093686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:11.585 [2024-07-20 18:05:46.185078] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.585 [2024-07-20 18:05:46.185145] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.585 [2024-07-20 18:05:46.185173] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.585 [2024-07-20 18:05:46.185194] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.585 [2024-07-20 18:05:46.185214] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:11.585 [2024-07-20 18:05:46.185313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:11.585 [2024-07-20 18:05:46.185434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:11.585 [2024-07-20 18:05:46.185441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.585 18:05:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:11.585 18:05:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:11.585 18:05:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:11.585 18:05:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:11.585 18:05:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:11.585 18:05:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.585 18:05:46 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:11.842 [2024-07-20 18:05:46.527822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.842 18:05:46 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:12.099 Malloc0 00:30:12.099 18:05:46 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:12.356 18:05:47 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:12.612 18:05:47 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:12.870 [2024-07-20 18:05:47.526127] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.870 18:05:47 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:13.126 [2024-07-20 18:05:47.766784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:13.127 18:05:47 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:13.384 [2024-07-20 18:05:48.011615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:13.384 18:05:48 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1061841 00:30:13.384 18:05:48 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:13.384 18:05:48 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:13.384 18:05:48 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1061841 /var/tmp/bdevperf.sock 00:30:13.384 18:05:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1061841 ']' 00:30:13.384 18:05:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:13.384 18:05:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:13.384 18:05:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:13.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
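For readers following the trace, the target-side state that host/failover.sh has just built up (TCP transport, a 64 MiB / 512 B-block Malloc bdev, one subsystem, and listeners on ports 4420-4422) can be reproduced with the same RPCs shown in the log. A minimal sketch, assuming an nvmf_tgt is already running inside the cvl_0_0_ns_spdk namespace and that rpc.py stands in for the full scripts/rpc.py path used above:

    # transport options as in host/failover.sh@22 (-o: C2H success optimization, -u 8192: in-capsule data size)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB Malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from the script)
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    # subsystem allowing any host (-a) with the serial number used by the test
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # three listeners on the target address; the test later rotates through them to force failover
    for port in 4420 4421 4422; do
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done

bdevperf, started above with -z -r /var/tmp/bdevperf.sock, is controlled through its own RPC socket, which is why the trace below switches between the default /var/tmp/spdk.sock and /var/tmp/bdevperf.sock when issuing RPCs.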
00:30:13.384 18:05:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:13.384 18:05:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:13.641 18:05:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:13.641 18:05:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:13.641 18:05:48 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:13.899 NVMe0n1 00:30:13.899 18:05:48 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:14.502 00:30:14.502 18:05:49 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1061941 00:30:14.502 18:05:49 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:14.502 18:05:49 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:15.462 18:05:50 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:15.719 [2024-07-20 18:05:50.349625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1683090 is same with the state(5) to be set 00:30:15.720 [2024-07-20 18:05:50.349697] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1683090 is same with the state(5) to be set 00:30:15.720 [2024-07-20 18:05:50.349714] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1683090 is same with the state(5) to be set 00:30:15.720 [2024-07-20 18:05:50.349726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1683090 is same with the state(5) to be set 00:30:15.720 [2024-07-20 18:05:50.349739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1683090 is same with the state(5) to be set 00:30:15.720 [2024-07-20 18:05:50.349751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1683090 is same with the state(5) to be set 00:30:15.720 [2024-07-20 18:05:50.349763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1683090 is same with the state(5) to be set 00:30:15.720 [2024-07-20 18:05:50.349775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1683090 is same with the state(5) to be set 00:30:15.720 [2024-07-20 18:05:50.349801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1683090 is same with the state(5) to be set 00:30:15.720 [2024-07-20 18:05:50.349816] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1683090 is same with the state(5) to be set 00:30:15.720 [2024-07-20 18:05:50.349827] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1683090 is same with the state(5) to be set 00:30:15.720 [2024-07-20 18:05:50.349839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1683090 is same with the state(5) to be set 00:30:15.720 [2024-07-20 18:05:50.349852] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1683090 is same with the state(5) to be set 00:30:15.720 [2024-07-20 18:05:50.349864] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1683090 is same with the state(5) to be set 00:30:15.720 18:05:50 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:18.995 18:05:53 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:19.252 00:30:19.252 18:05:53 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:19.509 [2024-07-20 18:05:54.142515] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142599] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142620] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142640] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142658] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142677] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142697] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142758] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142807] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142871] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142892] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142913] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142974] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.142994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143108] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143126] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143144] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143177] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143198] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143218] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143236] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143277] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143317] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143337] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the 
state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143378] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143396] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143437] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143594] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143640] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.509 [2024-07-20 18:05:54.143704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.510 [2024-07-20 18:05:54.143723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.510 [2024-07-20 18:05:54.143744] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.510 [2024-07-20 18:05:54.143764] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.510 [2024-07-20 18:05:54.143784] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.510 [2024-07-20 18:05:54.143821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.510 [2024-07-20 18:05:54.143842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.510 [2024-07-20 18:05:54.143863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.510 [2024-07-20 18:05:54.143885] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.510 [2024-07-20 18:05:54.143904] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.510 [2024-07-20 18:05:54.143926] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.510 [2024-07-20 18:05:54.143947] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.510 [2024-07-20 18:05:54.143966] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.510 [2024-07-20 18:05:54.143989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.510 [2024-07-20 18:05:54.144008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.510 [2024-07-20 18:05:54.144030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684610 is same with the state(5) to be set 00:30:19.510 18:05:54 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:22.786 18:05:57 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:22.787 [2024-07-20 18:05:57.431853] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:22.787 18:05:57 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:23.717 18:05:58 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:23.974 [2024-07-20 18:05:58.691663] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.691727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.691751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.691783] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.691814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.691833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.691852] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.691870] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.691888] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.691907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.691927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.691946] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.691966] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.691986] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692024] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692101] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692120] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692138] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692194] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692213] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692248] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692283] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692300] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692366] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 [2024-07-20 18:05:58.692385] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684980 is same with the state(5) to be set 00:30:23.974 18:05:58 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1061941 00:30:30.527 0 00:30:30.527 18:06:04 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1061841 00:30:30.527 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1061841 ']' 00:30:30.527 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1061841 00:30:30.527 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:30.527 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:30.527 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1061841 00:30:30.527 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:30.527 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:30.527 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1061841' 00:30:30.527 killing process with pid 1061841 00:30:30.527 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1061841 00:30:30.527 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1061841 00:30:30.527 18:06:04 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:30.527 [2024-07-20 18:05:48.074247] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:30:30.527 [2024-07-20 18:05:48.074334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1061841 ] 00:30:30.527 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.527 [2024-07-20 18:05:48.137411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.527 [2024-07-20 18:05:48.227926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.527 Running I/O for 15 seconds... 
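The remainder of the trace is the bdevperf output replayed by `cat .../try.txt`: while the 15-second verify workload runs, the test tears listeners down and brings new ones up, so in-flight commands on the dropped connection show up below as ABORTED - SQ DELETION (00/08) completions before the bdev_nvme failover path takes over on the surviving listener. The rotation driven by host/failover.sh@43-@57 earlier in the trace amounts to the following sequence; a sketch under the same assumptions as the target-setup sketch above:

    # bdevperf is initially attached to NVMe0 via ports 4420 (primary) and 4421 (secondary)
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # drop the primary path
    sleep 3
    # add a third path through bdevperf's own RPC socket, then drop 4421
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    # restore the original listener and drop the last alternate, failing back to 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

Each remove_listener disconnects the qpairs on that port, which is what produces the bursts of aborted READ/WRITE completions in the dump that follows.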
00:30:30.527 [2024-07-20 18:05:50.350338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.527 [2024-07-20 18:05:50.350380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.527 [2024-07-20 18:05:50.350424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.527 [2024-07-20 18:05:50.350456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.527 [2024-07-20 18:05:50.350486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.527 [2024-07-20 18:05:50.350516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.527 [2024-07-20 18:05:50.350546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.527 [2024-07-20 18:05:50.350575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.527 [2024-07-20 18:05:50.350605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.527 [2024-07-20 18:05:50.350634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.527 [2024-07-20 18:05:50.350665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350680] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.527 [2024-07-20 18:05:50.350709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.527 [2024-07-20 18:05:50.350747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.527 [2024-07-20 18:05:50.350791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.527 [2024-07-20 18:05:50.350842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.527 [2024-07-20 18:05:50.350873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.527 [2024-07-20 18:05:50.350902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.527 [2024-07-20 18:05:50.350931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.527 [2024-07-20 18:05:50.350961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.350976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.527 [2024-07-20 18:05:50.350990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.351005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.527 [2024-07-20 18:05:50.351019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.351034] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.527 [2024-07-20 18:05:50.351047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.351062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.527 [2024-07-20 18:05:50.351076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.351092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.527 [2024-07-20 18:05:50.351105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.351137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.527 [2024-07-20 18:05:50.351155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.351170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.527 [2024-07-20 18:05:50.351184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.351199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.527 [2024-07-20 18:05:50.351212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.527 [2024-07-20 18:05:50.351227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79368 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.528 [2024-07-20 18:05:50.351529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.528 [2024-07-20 18:05:50.351557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.528 [2024-07-20 18:05:50.351585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.528 [2024-07-20 18:05:50.351612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:30.528 [2024-07-20 18:05:50.351640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.528 [2024-07-20 18:05:50.351669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.528 [2024-07-20 18:05:50.351697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.528 [2024-07-20 18:05:50.351725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351965] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.351981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.351994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.352009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.352023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.352038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.352052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.352067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.352081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.352096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.352136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.352151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.352164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.352179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.352193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.352207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.352221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.352235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.352248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.352263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.352276] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.352291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.352309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.352324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.352338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.352353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.352366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.352381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.352394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.352409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.352422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.352437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.352450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.352465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.352479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.528 [2024-07-20 18:05:50.352493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.528 [2024-07-20 18:05:50.352507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.352521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.352534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.352549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.352562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.352577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.352590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.352604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.352618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.352633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.352646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.352665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.352678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.352693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.352707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.352721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.352734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.352749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.352762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.352776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.352789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.352828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.352843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.352859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.352872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 
[2024-07-20 18:05:50.352888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.352902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.352917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.352931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.352946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.352961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.352976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.352989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.529 [2024-07-20 18:05:50.353134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.529 [2024-07-20 18:05:50.353162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.529 [2024-07-20 18:05:50.353191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.529 [2024-07-20 18:05:50.353219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.529 [2024-07-20 18:05:50.353247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.529 [2024-07-20 18:05:50.353275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.529 [2024-07-20 18:05:50.353303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.529 [2024-07-20 18:05:50.353331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:105 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.529 [2024-07-20 18:05:50.353774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79888 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:30.529 [2024-07-20 18:05:50.353789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.353833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:50.353849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.353871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:50.353885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.353901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:50.353914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.353930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:50.353944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.353959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:50.353973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.353988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:50.354002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.354017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:50.354031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.354047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:50.354061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.354076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.530 [2024-07-20 18:05:50.354090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.354105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.530 [2024-07-20 
18:05:50.354133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.354148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.530 [2024-07-20 18:05:50.354162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.354176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.530 [2024-07-20 18:05:50.354189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.354204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.530 [2024-07-20 18:05:50.354222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.354237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.530 [2024-07-20 18:05:50.354251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.354266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.530 [2024-07-20 18:05:50.354279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.354307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.530 [2024-07-20 18:05:50.354322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.530 [2024-07-20 18:05:50.354334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79240 len:8 PRP1 0x0 PRP2 0x0 00:30:30.530 [2024-07-20 18:05:50.354347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.354409] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2326ef0 was disconnected and freed. reset controller. 
00:30:30.530 [2024-07-20 18:05:50.354426] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:30.530 [2024-07-20 18:05:50.354474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.530 [2024-07-20 18:05:50.354493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.354508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.530 [2024-07-20 18:05:50.354522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.354536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.530 [2024-07-20 18:05:50.354549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.354562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.530 [2024-07-20 18:05:50.354575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:50.354589] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:30.530 [2024-07-20 18:05:50.357878] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:30.530 [2024-07-20 18:05:50.357916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2307740 (9): Bad file descriptor 00:30:30.530 [2024-07-20 18:05:50.478899] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
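The burst of ABORTED - SQ DELETION completions above is consistent with the qpair teardown logged at its end: once qpair 0x2326ef0 is disconnected and freed, every queued READ/WRITE on qid:1 is manually completed, the controller enters the failed state, and bdev_nvme starts a failover from 10.0.0.2:4420 to 10.0.0.2:4421 before the reset completes. A minimal sketch for summarising such a burst offline, assuming the console output has been saved to a hypothetical file named build.log (these commands are not part of the test run itself):

  # count occurrences of completions aborted by the submission-queue deletion
  grep -o 'ABORTED - SQ DELETION' build.log | wc -l

  # tally the aborted commands on qid:1 by opcode (READ vs WRITE)
  grep -oE '(READ|WRITE) sqid:1' build.log | sort | uniq -c

The first pipeline counts the aborted completions printed by spdk_nvme_print_completion; the second tallies how many of the commands printed by nvme_io_qpair_print_command on qid:1 were reads versus writes.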
00:30:30.530 [2024-07-20 18:05:54.146003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.530 [2024-07-20 18:05:54.146045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:119288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:119304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:119312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:119320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:119328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:119336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:119352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146349] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:119360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:119368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:119376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:119392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.530 [2024-07-20 18:05:54.146577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.530 [2024-07-20 18:05:54.146591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.146605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:119432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.146618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.146633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.146646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.146660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:119448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.146673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.146688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.146701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.146715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:119464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.146729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.146743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.146756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.146771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.146808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.146824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.146853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.146870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:119496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.146888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.146904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:119504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.146918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.146933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:119512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.146947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.146963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:25 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.146977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.146992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:119528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:119536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:119552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:119568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:119584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:119600 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:119608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:119616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:119624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:119632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:119640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:119648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:119656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:119664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.531 [2024-07-20 18:05:54.147543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:119680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:30.531 [2024-07-20 18:05:54.147571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.531 [2024-07-20 18:05:54.147598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.531 [2024-07-20 18:05:54.147633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.531 [2024-07-20 18:05:54.147648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:119248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.532 [2024-07-20 18:05:54.147661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.147676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.532 [2024-07-20 18:05:54.147689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.147703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.532 [2024-07-20 18:05:54.147716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.147730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.532 [2024-07-20 18:05:54.147743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.147758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:119280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.532 [2024-07-20 18:05:54.147771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.147785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.147823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.147840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:119696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.147854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.147869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 
18:05:54.147883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.147898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:119712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.147912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.147926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:119720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.147940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.147955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:119728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.147968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.147983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:119736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.147996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:119752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:119760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:119768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:119784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:119792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:119800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:119816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:119824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:119832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:119840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:119856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:119864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:119880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:119888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:119896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:119904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:119912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:119920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:119928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:119936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.532 [2024-07-20 18:05:54.148743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.532 [2024-07-20 18:05:54.148815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119944 len:8 PRP1 0x0 PRP2 0x0 00:30:30.532 [2024-07-20 
18:05:54.148831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.532 [2024-07-20 18:05:54.148862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.532 [2024-07-20 18:05:54.148874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119952 len:8 PRP1 0x0 PRP2 0x0 00:30:30.532 [2024-07-20 18:05:54.148887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.532 [2024-07-20 18:05:54.148912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.532 [2024-07-20 18:05:54.148923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119960 len:8 PRP1 0x0 PRP2 0x0 00:30:30.532 [2024-07-20 18:05:54.148937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.532 [2024-07-20 18:05:54.148951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.532 [2024-07-20 18:05:54.148962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.532 [2024-07-20 18:05:54.148973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119968 len:8 PRP1 0x0 PRP2 0x0 00:30:30.532 [2024-07-20 18:05:54.148987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119976 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119984 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119992 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120000 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120008 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120016 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120024 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120032 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120040 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120048 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120056 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120064 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120072 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120080 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120088 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120096 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120104 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120112 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120120 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.149949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.149962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.149974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.149989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120128 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.150003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.150016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.150028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.150039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120136 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.150052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 
18:05:54.150066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.150077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.150088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120144 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.150116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.150129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.150141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.150152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120152 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.150164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.150177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.150188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.150200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120160 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.150213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.533 [2024-07-20 18:05:54.150227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.533 [2024-07-20 18:05:54.150238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.533 [2024-07-20 18:05:54.150249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120168 len:8 PRP1 0x0 PRP2 0x0 00:30:30.533 [2024-07-20 18:05:54.150268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:54.150281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.534 [2024-07-20 18:05:54.150292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.534 [2024-07-20 18:05:54.150304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120176 len:8 PRP1 0x0 PRP2 0x0 00:30:30.534 [2024-07-20 18:05:54.150317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:54.150330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.534 [2024-07-20 18:05:54.150341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.534 [2024-07-20 18:05:54.150352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120184 len:8 PRP1 0x0 PRP2 0x0 00:30:30.534 [2024-07-20 18:05:54.150365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:54.150382] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.534 [2024-07-20 18:05:54.150394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.534 [2024-07-20 18:05:54.150405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120192 len:8 PRP1 0x0 PRP2 0x0 00:30:30.534 [2024-07-20 18:05:54.150418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:54.150431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.534 [2024-07-20 18:05:54.150442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.534 [2024-07-20 18:05:54.150454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120200 len:8 PRP1 0x0 PRP2 0x0 00:30:30.534 [2024-07-20 18:05:54.150467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:54.150481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.534 [2024-07-20 18:05:54.150492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.534 [2024-07-20 18:05:54.150503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120208 len:8 PRP1 0x0 PRP2 0x0 00:30:30.534 [2024-07-20 18:05:54.150522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:54.150536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.534 [2024-07-20 18:05:54.150547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.534 [2024-07-20 18:05:54.150558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120216 len:8 PRP1 0x0 PRP2 0x0 00:30:30.534 [2024-07-20 18:05:54.150571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:54.150584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.534 [2024-07-20 18:05:54.150595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.534 [2024-07-20 18:05:54.150606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120224 len:8 PRP1 0x0 PRP2 0x0 00:30:30.534 [2024-07-20 18:05:54.150624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:54.150638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.534 [2024-07-20 18:05:54.150649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.534 [2024-07-20 18:05:54.150660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120232 len:8 PRP1 0x0 PRP2 0x0 00:30:30.534 [2024-07-20 18:05:54.150673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:54.150686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:30:30.534 [2024-07-20 18:05:54.150696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.534 [2024-07-20 18:05:54.150707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120240 len:8 PRP1 0x0 PRP2 0x0 00:30:30.534 [2024-07-20 18:05:54.150720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:54.150791] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2328f20 was disconnected and freed. reset controller. 00:30:30.534 [2024-07-20 18:05:54.150817] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:30.534 [2024-07-20 18:05:54.150852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.534 [2024-07-20 18:05:54.150874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:54.150890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.534 [2024-07-20 18:05:54.150910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:54.150929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.534 [2024-07-20 18:05:54.150943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:54.150957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.534 [2024-07-20 18:05:54.150970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:54.150984] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:30.534 [2024-07-20 18:05:54.151036] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2307740 (9): Bad file descriptor 00:30:30.534 [2024-07-20 18:05:54.154273] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:30.534 [2024-07-20 18:05:54.318656] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
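The entries above trace one complete failover cycle on nqn.2016-06.io.spdk:cnode1: every command still queued on qpair 1 is completed with ABORTED - SQ DELETION once the connection to 10.0.0.2:4421 is lost, bdev_nvme then starts failover to the secondary path at 10.0.0.2:4422, disconnects and resets the controller, and the reset completes successfully. The sketch below is a toy model of that ordering only (abort queued I/O, switch path, reset); every identifier in it is hypothetical and it is not SPDK source.

    /* Toy model of the failover sequence seen in the log above.
     * All names are hypothetical; this is not SPDK code. */
    #include <stdio.h>

    struct path { const char *addr; int port; };

    struct toy_ctrlr {
        struct path paths[2];   /* primary and secondary transport addresses */
        int active;             /* index of the path currently in use */
        int queued_io;          /* commands still sitting on the qpair */
    };

    /* Complete every queued command with an "aborted, SQ deleted" status,
     * as the driver does when the submission queue goes away. */
    static void abort_queued_io(struct toy_ctrlr *c)
    {
        while (c->queued_io > 0)
            printf("ABORTED - SQ DELETION (queued I/O left: %d)\n", --c->queued_io);
    }

    /* Switch to the other path and reset, mirroring the ordering
     * failover trid -> resetting controller -> reset successful. */
    static void failover_and_reset(struct toy_ctrlr *c)
    {
        int next = 1 - c->active;
        printf("Start failover from %s:%d to %s:%d\n",
               c->paths[c->active].addr, c->paths[c->active].port,
               c->paths[next].addr, c->paths[next].port);
        c->active = next;
        printf("resetting controller\n");
        printf("Resetting controller successful.\n");
    }

    int main(void)
    {
        struct toy_ctrlr c = {
            .paths = { { "10.0.0.2", 4421 }, { "10.0.0.2", 4422 } },
            .active = 0,
            .queued_io = 3,
        };
        abort_queued_io(&c);    /* the active path dropped the connection */
        failover_and_reset(&c);
        return 0;
    }

The real driver performs this per qpair inside its reset path and the log interleaves the per-command notices; the sketch only fixes the ordering that the notices above document, nothing more.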
00:30:30.534 [2024-07-20 18:05:58.693167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693513] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.534 [2024-07-20 18:05:58.693788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.534 [2024-07-20 18:05:58.693819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.693834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.693866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.693880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.693895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.693909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.693925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.693938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.693954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.693968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.693984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.693998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:58 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96904 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:30.535 [2024-07-20 18:05:58.694741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.535 [2024-07-20 18:05:58.694938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.535 [2024-07-20 18:05:58.694953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.694966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.694985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695057] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695642] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.695976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.695989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.696004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.536 [2024-07-20 18:05:58.696018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.696033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.536 [2024-07-20 18:05:58.696046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.696061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.536 [2024-07-20 18:05:58.696077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.696107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.536 [2024-07-20 18:05:58.696124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.696139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.536 [2024-07-20 18:05:58.696157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.696172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.536 [2024-07-20 18:05:58.696185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.696200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.536 [2024-07-20 18:05:58.696213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.536 [2024-07-20 18:05:58.696227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.536 [2024-07-20 18:05:58.696240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:30.537 [2024-07-20 18:05:58.696282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696565] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696894] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.696973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.696988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.697002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.697017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.697031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.697047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.537 [2024-07-20 18:05:58.697061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.697089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.537 [2024-07-20 18:05:58.697120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.537 [2024-07-20 18:05:58.697132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97600 len:8 PRP1 0x0 PRP2 0x0 00:30:30.537 [2024-07-20 18:05:58.697146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.697202] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x232ad70 was disconnected and freed. reset controller. 
00:30:30.537 [2024-07-20 18:05:58.697218] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:30.537 [2024-07-20 18:05:58.697266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.537 [2024-07-20 18:05:58.697295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.697319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.537 [2024-07-20 18:05:58.697333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.697347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.537 [2024-07-20 18:05:58.697360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.697374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.537 [2024-07-20 18:05:58.697387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.537 [2024-07-20 18:05:58.697404] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:30.537 [2024-07-20 18:05:58.697443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2307740 (9): Bad file descriptor 00:30:30.537 [2024-07-20 18:05:58.700743] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:30.537 [2024-07-20 18:05:58.866087] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
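The reset logged above completes with a failover from 10.0.0.2:4422 back to 10.0.0.2:4420. For reference, the listeners and alternate controller paths that give bdev_nvme somewhere to fail over to are registered with RPC calls of roughly the shape below. This is a minimal sketch assembled from the rpc.py invocations that appear further down in this trace; the NQN, addresses, ports and the bdevperf socket path are this test's own values, not generic defaults.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Target side: add the extra TCP listeners the test fails over between.
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # Initiator (bdevperf) side: attach the same subsystem once per path, under the same
  # bdev name, so bdev_nvme has alternate paths to fall back on when the active one drops.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1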
00:30:30.537 00:30:30.537 Latency(us) 00:30:30.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.537 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:30.537 Verification LBA range: start 0x0 length 0x4000 00:30:30.537 NVMe0n1 : 15.01 8738.50 34.13 1189.47 0.00 12865.65 1074.06 18738.44 00:30:30.537 =================================================================================================================== 00:30:30.537 Total : 8738.50 34.13 1189.47 0.00 12865.65 1074.06 18738.44 00:30:30.537 Received shutdown signal, test time was about 15.000000 seconds 00:30:30.537 00:30:30.537 Latency(us) 00:30:30.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.537 =================================================================================================================== 00:30:30.537 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:30.537 18:06:04 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:30.537 18:06:04 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:30.537 18:06:04 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:30.537 18:06:04 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1063888 00:30:30.537 18:06:04 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:30.537 18:06:04 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1063888 /var/tmp/bdevperf.sock 00:30:30.537 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1063888 ']' 00:30:30.537 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:30.537 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:30.538 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:30.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:30.538 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:30.538 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:30.538 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:30.538 18:06:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:30.538 18:06:04 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:30.538 [2024-07-20 18:06:05.042405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:30.538 18:06:05 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:30.538 [2024-07-20 18:06:05.291038] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:30.538 18:06:05 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:31.102 NVMe0n1 00:30:31.102 18:06:05 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:31.359 00:30:31.359 18:06:06 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:31.617 00:30:31.617 18:06:06 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:31.617 18:06:06 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:31.874 18:06:06 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:32.132 18:06:06 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:35.408 18:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:35.408 18:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:35.408 18:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1065059 00:30:35.408 18:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:35.408 18:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1065059 00:30:36.824 0 00:30:36.824 18:06:11 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:36.824 [2024-07-20 18:06:04.552445] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:30:36.824 [2024-07-20 18:06:04.552533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063888 ] 00:30:36.824 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.824 [2024-07-20 18:06:04.614937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.824 [2024-07-20 18:06:04.698250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.824 [2024-07-20 18:06:06.888512] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:36.824 [2024-07-20 18:06:06.888602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:36.824 [2024-07-20 18:06:06.888626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.824 [2024-07-20 18:06:06.888643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:36.824 [2024-07-20 18:06:06.888657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.824 [2024-07-20 18:06:06.888671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:36.824 [2024-07-20 18:06:06.888685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.824 [2024-07-20 18:06:06.888706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:36.824 [2024-07-20 18:06:06.888720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.824 [2024-07-20 18:06:06.888734] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:36.824 [2024-07-20 18:06:06.888783] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:36.824 [2024-07-20 18:06:06.888826] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2457740 (9): Bad file descriptor 00:30:36.824 [2024-07-20 18:06:06.991131] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:36.824 Running I/O for 1 seconds... 
00:30:36.824 00:30:36.824 Latency(us) 00:30:36.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.824 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:36.824 Verification LBA range: start 0x0 length 0x4000 00:30:36.824 NVMe0n1 : 1.01 8724.87 34.08 0.00 0.00 14608.74 1844.72 16699.54 00:30:36.824 =================================================================================================================== 00:30:36.824 Total : 8724.87 34.08 0.00 0.00 14608.74 1844.72 16699.54 00:30:36.824 18:06:11 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:36.824 18:06:11 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:36.824 18:06:11 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:37.080 18:06:11 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:37.080 18:06:11 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:37.338 18:06:12 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:37.594 18:06:12 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:40.889 18:06:15 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:40.889 18:06:15 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:40.889 18:06:15 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1063888 00:30:40.889 18:06:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1063888 ']' 00:30:40.889 18:06:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1063888 00:30:40.889 18:06:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:40.890 18:06:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:40.890 18:06:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1063888 00:30:40.890 18:06:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:40.890 18:06:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:40.890 18:06:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1063888' 00:30:40.890 killing process with pid 1063888 00:30:40.890 18:06:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1063888 00:30:40.890 18:06:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1063888 00:30:41.147 18:06:15 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:41.147 18:06:15 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:41.404 18:06:16 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:41.404 
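Each failover in this run is forced by detaching whichever path the NVMe0 controller is currently using and letting bdev_nvme retry on the remaining listeners. A rough equivalent of the sequence traced above, using the same RPCs (sketch only; the socket path, address and NQN are the test's values):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock

  # Make sure the controller is attached before touching it.
  $RPC -s $SOCK bdev_nvme_get_controllers | grep -q NVMe0

  # Drop the active path; bdev_nvme should reconnect on one of the other listeners.
  $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3

  # The controller should still be present, now served by a surviving path.
  $RPC -s $SOCK bdev_nvme_get_controllers | grep -q NVMe0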
18:06:16 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:41.404 18:06:16 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:41.404 18:06:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:41.404 18:06:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:41.404 18:06:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:41.404 18:06:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:41.404 18:06:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:41.404 18:06:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:41.404 rmmod nvme_tcp 00:30:41.404 rmmod nvme_fabrics 00:30:41.404 rmmod nvme_keyring 00:30:41.661 18:06:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:41.661 18:06:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:41.661 18:06:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:41.661 18:06:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1061632 ']' 00:30:41.661 18:06:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1061632 00:30:41.661 18:06:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1061632 ']' 00:30:41.661 18:06:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1061632 00:30:41.661 18:06:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:41.661 18:06:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:41.661 18:06:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1061632 00:30:41.661 18:06:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:41.661 18:06:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:41.661 18:06:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1061632' 00:30:41.661 killing process with pid 1061632 00:30:41.661 18:06:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1061632 00:30:41.661 18:06:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1061632 00:30:41.919 18:06:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:41.919 18:06:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:41.919 18:06:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:41.919 18:06:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:41.919 18:06:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:41.919 18:06:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.919 18:06:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:41.919 18:06:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.819 18:06:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:43.819 00:30:43.819 real 0m34.805s 00:30:43.819 user 1m58.190s 00:30:43.819 sys 0m6.969s 00:30:43.819 18:06:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:43.819 18:06:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
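Teardown for this test amounts to deleting the subsystem, unloading the initiator modules, stopping the target and flushing the test interface, as traced above. A condensed sketch of those steps (the cvl_0_* interface name and the target PID are specific to this CI host, and the trace actually drives this through the killprocess/nvmftestfini helpers rather than raw commands):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the test subsystem
  modprobe -v -r nvme-tcp                                 # unload initiator transport modules
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                         # placeholder for the target PID (1061632 in this run)
  ip -4 addr flush cvl_0_1                                # clear the address from the initiator-side port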
00:30:43.819 ************************************ 00:30:43.819 END TEST nvmf_failover 00:30:43.819 ************************************ 00:30:43.819 18:06:18 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:43.819 18:06:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:43.819 18:06:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:43.819 18:06:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:43.819 ************************************ 00:30:43.819 START TEST nvmf_host_discovery 00:30:43.819 ************************************ 00:30:43.819 18:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:43.819 * Looking for test storage... 00:30:43.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:43.819 18:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.819 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:43.819 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.819 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.819 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.819 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.819 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.819 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.819 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.819 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.819 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.819 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.819 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:43.819 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:43.819 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.820 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.820 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:43.820 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.820 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:43.820 18:06:18 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.820 18:06:18 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.820 18:06:18 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.820 18:06:18 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.820 18:06:18 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.820 18:06:18 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.820 18:06:18 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:43.820 18:06:18 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:44.078 18:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:45.978 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:45.978 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:45.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:45.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:30:45.978 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:45.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:30:45.979 00:30:45.979 --- 10.0.0.2 ping statistics --- 00:30:45.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.979 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:45.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:30:45.979 00:30:45.979 --- 10.0.0.1 ping statistics --- 00:30:45.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.979 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:45.979 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:46.237 18:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:46.237 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:46.237 18:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:46.237 18:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.237 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1067731 00:30:46.237 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:46.237 18:06:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1067731 00:30:46.237 18:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1067731 ']' 00:30:46.237 18:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.237 18:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:46.237 18:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:46.237 18:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:46.237 18:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.237 [2024-07-20 18:06:20.834900] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:30:46.237 [2024-07-20 18:06:20.834983] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:46.237 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.237 [2024-07-20 18:06:20.901268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.237 [2024-07-20 18:06:20.990323] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:46.237 [2024-07-20 18:06:20.990388] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:46.237 [2024-07-20 18:06:20.990413] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:46.237 [2024-07-20 18:06:20.990426] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:46.237 [2024-07-20 18:06:20.990438] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:46.237 [2024-07-20 18:06:20.990469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.496 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:46.496 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:30:46.496 18:06:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:46.496 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:46.496 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.496 18:06:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:46.496 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:46.496 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.496 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.496 [2024-07-20 18:06:21.124252] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.496 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.496 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:46.496 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.496 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.496 [2024-07-20 18:06:21.132458] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:46.496 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.496 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:46.496 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.496 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.496 null0 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.497 null1 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1067795 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 1067795 /tmp/host.sock 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1067795 ']' 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:46.497 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:46.497 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.497 [2024-07-20 18:06:21.203490] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:30:46.497 [2024-07-20 18:06:21.203555] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067795 ] 00:30:46.497 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.497 [2024-07-20 18:06:21.264645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.755 [2024-07-20 18:06:21.355654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # 
sort 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:46.755 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:47.014 18:06:21 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.014 [2024-07-20 18:06:21.758124] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:47.014 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:30:47.273 18:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:30:47.838 [2024-07-20 18:06:22.494579] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:47.838 [2024-07-20 18:06:22.494611] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:47.838 [2024-07-20 18:06:22.494632] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:47.838 [2024-07-20 18:06:22.582956] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:48.096 [2024-07-20 18:06:22.807157] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:30:48.096 [2024-07-20 18:06:22.807186] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:48.354 18:06:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:48.354 18:06:23 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:48.354 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.613 [2024-07-20 18:06:23.182210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:48.613 [2024-07-20 18:06:23.183033] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:48.613 [2024-07-20 18:06:23.183095] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.613 18:06:23 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:48.613 18:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:30:48.613 [2024-07-20 18:06:23.309461] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:48.613 [2024-07-20 18:06:23.367103] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:48.613 [2024-07-20 18:06:23.367137] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:48.613 [2024-07-20 18:06:23.367149] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:49.545 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:49.545 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:49.545 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:30:49.545 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:49.545 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:49.545 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.545 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:49.545 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:49.545 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:49.545 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:49.823 [2024-07-20 18:06:24.398360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.823 [2024-07-20 18:06:24.398396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.823 [2024-07-20 18:06:24.398415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.823 [2024-07-20 18:06:24.398431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.823 [2024-07-20 18:06:24.398447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.823 [2024-07-20 18:06:24.398462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.823 [2024-07-20 18:06:24.398478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:49.823 [2024-07-20 18:06:24.398493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:49.823 [2024-07-20 18:06:24.398507] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23adda0 is same with the state(5) to be set 00:30:49.823 [2024-07-20 18:06:24.398578] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:49.823 [2024-07-20 18:06:24.398610] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:49.823 18:06:24 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:49.823 [2024-07-20 18:06:24.408363] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23adda0 (9): Bad file descriptor 00:30:49.823 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.823 [2024-07-20 18:06:24.418406] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.823 [2024-07-20 18:06:24.418715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.823 [2024-07-20 18:06:24.418748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23adda0 with addr=10.0.0.2, port=4420 00:30:49.823 [2024-07-20 18:06:24.418767] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23adda0 is same with the state(5) to be set 00:30:49.823 [2024-07-20 18:06:24.418801] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23adda0 (9): Bad file descriptor 00:30:49.823 [2024-07-20 18:06:24.418864] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.823 [2024-07-20 18:06:24.418885] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.823 [2024-07-20 18:06:24.418901] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.823 [2024-07-20 18:06:24.418921] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
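The connect() failures with errno 111 (connection refused) in this stretch are the expected fallout of nvmf_subsystem_remove_listener on port 4420: the host still holds a path to 10.0.0.2:4420 and keeps retrying it until the refreshed discovery log page drops that entry. While that settles, the script polls with its waitforcondition helper; a minimal stand-in for that pattern (the real helper lives in autotest_common.sh, and this sketch only mirrors its 10-try, 1-second-sleep loop) would be:

    # Poll an arbitrary shell condition, roughly like waitforcondition in autotest_common.sh.
    wait_for_condition() {
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1   # condition never became true within ~10s
    }

    # e.g. wait until only the second port (4421) is left on controller nvme0,
    # using the test's get_subsystem_paths helper from host/discovery.sh
    wait_for_condition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'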
00:30:49.823 [2024-07-20 18:06:24.428487] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.823 [2024-07-20 18:06:24.428767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.823 [2024-07-20 18:06:24.428802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23adda0 with addr=10.0.0.2, port=4420 00:30:49.823 [2024-07-20 18:06:24.428821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23adda0 is same with the state(5) to be set 00:30:49.823 [2024-07-20 18:06:24.428844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23adda0 (9): Bad file descriptor 00:30:49.823 [2024-07-20 18:06:24.428880] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.823 [2024-07-20 18:06:24.428899] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.823 [2024-07-20 18:06:24.428913] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.823 [2024-07-20 18:06:24.428932] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.823 [2024-07-20 18:06:24.438560] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.823 [2024-07-20 18:06:24.438854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.824 [2024-07-20 18:06:24.438883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23adda0 with addr=10.0.0.2, port=4420 00:30:49.824 [2024-07-20 18:06:24.438900] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23adda0 is same with the state(5) to be set 00:30:49.824 [2024-07-20 18:06:24.438923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23adda0 (9): Bad file descriptor 00:30:49.824 [2024-07-20 18:06:24.438960] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.824 [2024-07-20 18:06:24.438980] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.824 [2024-07-20 18:06:24.438995] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.824 [2024-07-20 18:06:24.439014] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:49.824 [2024-07-20 18:06:24.448631] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.824 [2024-07-20 18:06:24.448969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.824 [2024-07-20 18:06:24.448999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23adda0 with addr=10.0.0.2, port=4420 00:30:49.824 [2024-07-20 18:06:24.449016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23adda0 is same with the state(5) to be set 00:30:49.824 [2024-07-20 18:06:24.449040] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23adda0 (9): Bad file descriptor 00:30:49.824 [2024-07-20 18:06:24.449077] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.824 [2024-07-20 18:06:24.449097] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.824 [2024-07-20 18:06:24.449112] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.824 [2024-07-20 18:06:24.449146] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
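Once the stale 4420 path is finally pruned, the check that follows (host/discovery.sh@131) reduces the controller's remaining paths to a single string. Spelled out against the host's RPC socket, it is essentially the pipeline below (rpc_cmd is the autotest wrapper that, in this setup, forwards to scripts/rpc.py):

    # List the listener ports (trsvcid) of every path attached to controller nvme0,
    # numerically sorted and joined onto one line; expected to be just "4421" here.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs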
00:30:49.824 [2024-07-20 18:06:24.458705] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.824 [2024-07-20 18:06:24.458975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.824 [2024-07-20 18:06:24.459004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23adda0 with addr=10.0.0.2, port=4420 00:30:49.824 [2024-07-20 18:06:24.459022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23adda0 is same with the state(5) to be set 00:30:49.824 [2024-07-20 18:06:24.459045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23adda0 (9): Bad file descriptor 00:30:49.824 [2024-07-20 18:06:24.459066] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.824 [2024-07-20 18:06:24.459080] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.824 [2024-07-20 18:06:24.459094] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.824 [2024-07-20 18:06:24.459128] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.824 [2024-07-20 18:06:24.468791] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.824 [2024-07-20 18:06:24.469069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.824 [2024-07-20 18:06:24.469097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23adda0 with addr=10.0.0.2, port=4420 00:30:49.824 [2024-07-20 18:06:24.469114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23adda0 is same with the state(5) to be set 00:30:49.824 [2024-07-20 18:06:24.469143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23adda0 (9): Bad file descriptor 00:30:49.824 [2024-07-20 18:06:24.469164] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.824 [2024-07-20 18:06:24.469179] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.824 [2024-07-20 18:06:24.469210] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.824 [2024-07-20 18:06:24.469228] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
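The is_notification_count_eq checks scattered through this test all boil down to counting bdev notifications newer than the last consumed id and, as the traces above suggest, advancing notify_id by that count. As a sketch, with the socket path and -i offset used at this point in the run:

    # Count notifications newer than the last consumed id (2 at this point in the test).
    count=$(./scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 2 | jq '. | length')
    echo "new notifications: $count"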
00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.824 [2024-07-20 18:06:24.478869] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.824 [2024-07-20 18:06:24.479158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.824 [2024-07-20 18:06:24.479185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23adda0 with addr=10.0.0.2, port=4420 00:30:49.824 [2024-07-20 18:06:24.479202] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23adda0 is same with the state(5) to be set 00:30:49.824 [2024-07-20 18:06:24.479225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23adda0 (9): Bad file descriptor 00:30:49.824 [2024-07-20 18:06:24.479261] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:49.824 [2024-07-20 18:06:24.479276] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:49.824 [2024-07-20 18:06:24.479289] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:49.824 [2024-07-20 18:06:24.479339] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:30:49.824 [2024-07-20 18:06:24.486318] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:49.824 [2024-07-20 18:06:24.486350] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:49.824 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.082 18:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.014 [2024-07-20 18:06:25.712021] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:51.014 [2024-07-20 18:06:25.712042] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:51.014 [2024-07-20 18:06:25.712061] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:51.014 [2024-07-20 18:06:25.799363] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:51.272 [2024-07-20 18:06:25.905811] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:51.272 [2024-07-20 18:06:25.905863] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:30:51.272 request: 00:30:51.272 { 00:30:51.272 "name": "nvme", 00:30:51.272 "trtype": "tcp", 00:30:51.272 "traddr": "10.0.0.2", 00:30:51.272 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:51.272 "adrfam": "ipv4", 00:30:51.272 "trsvcid": "8009", 00:30:51.272 "wait_for_attach": true, 00:30:51.272 "method": "bdev_nvme_start_discovery", 00:30:51.272 "req_id": 1 00:30:51.272 } 00:30:51.272 Got JSON-RPC error response 00:30:51.272 response: 00:30:51.272 { 00:30:51.272 "code": -17, 00:30:51.272 "message": "File exists" 00:30:51.272 } 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:51.272 18:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.272 request: 00:30:51.272 { 00:30:51.272 "name": "nvme_second", 00:30:51.272 "trtype": "tcp", 00:30:51.272 "traddr": "10.0.0.2", 00:30:51.272 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:51.272 "adrfam": "ipv4", 00:30:51.272 "trsvcid": "8009", 00:30:51.272 "wait_for_attach": true, 00:30:51.272 "method": "bdev_nvme_start_discovery", 00:30:51.272 "req_id": 1 00:30:51.272 } 00:30:51.272 Got JSON-RPC error response 00:30:51.272 response: 00:30:51.272 { 00:30:51.272 "code": -17, 00:30:51.272 "message": "File exists" 00:30:51.272 } 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:51.272 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.534 18:06:26 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.534 18:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:52.523 [2024-07-20 18:06:27.125421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.523 [2024-07-20 18:06:27.125473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23df5f0 with addr=10.0.0.2, port=8010 00:30:52.523 [2024-07-20 18:06:27.125501] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:52.523 [2024-07-20 18:06:27.125517] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:52.523 [2024-07-20 18:06:27.125532] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:53.455 [2024-07-20 18:06:28.127779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.455 [2024-07-20 18:06:28.127842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23df980 with addr=10.0.0.2, port=8010 00:30:53.455 [2024-07-20 18:06:28.127865] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:53.455 [2024-07-20 18:06:28.127878] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:53.455 [2024-07-20 18:06:28.127890] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:54.388 [2024-07-20 18:06:29.129949] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:54.388 request: 00:30:54.388 { 00:30:54.388 "name": "nvme_second", 00:30:54.388 "trtype": "tcp", 00:30:54.388 "traddr": "10.0.0.2", 00:30:54.388 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:54.388 "adrfam": "ipv4", 00:30:54.388 "trsvcid": "8010", 00:30:54.388 "attach_timeout_ms": 3000, 00:30:54.388 "method": "bdev_nvme_start_discovery", 00:30:54.388 "req_id": 1 00:30:54.388 } 00:30:54.388 Got JSON-RPC error response 00:30:54.388 response: 00:30:54.388 { 00:30:54.388 "code": -110, 00:30:54.388 "message": "Connection timed out" 
00:30:54.388 } 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1067795 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:54.388 18:06:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:54.388 rmmod nvme_tcp 00:30:54.647 rmmod nvme_fabrics 00:30:54.647 rmmod nvme_keyring 00:30:54.647 18:06:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:54.647 18:06:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:54.647 18:06:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:54.647 18:06:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1067731 ']' 00:30:54.647 18:06:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1067731 00:30:54.647 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 1067731 ']' 00:30:54.647 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 1067731 00:30:54.647 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:30:54.647 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:54.647 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1067731 00:30:54.647 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:54.647 18:06:29 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:54.647 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1067731' 00:30:54.647 killing process with pid 1067731 00:30:54.647 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 1067731 00:30:54.647 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 1067731 00:30:54.906 18:06:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:54.906 18:06:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:54.906 18:06:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:54.906 18:06:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:54.906 18:06:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:54.906 18:06:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.906 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:54.906 18:06:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:56.832 00:30:56.832 real 0m12.956s 00:30:56.832 user 0m18.552s 00:30:56.832 sys 0m2.745s 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.832 ************************************ 00:30:56.832 END TEST nvmf_host_discovery 00:30:56.832 ************************************ 00:30:56.832 18:06:31 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:56.832 18:06:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:56.832 18:06:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:56.832 18:06:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:56.832 ************************************ 00:30:56.832 START TEST nvmf_host_multipath_status 00:30:56.832 ************************************ 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:56.832 * Looking for test storage... 
00:30:56.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.832 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:57.092 18:06:31 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:57.092 18:06:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:58.994 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:58.994 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:58.994 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:58.994 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:58.994 18:06:33 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:58.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:58.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:30:58.994 00:30:58.994 --- 10.0.0.2 ping statistics --- 00:30:58.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.994 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:30:58.994 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:58.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:58.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:30:58.995 00:30:58.995 --- 10.0.0.1 ping statistics --- 00:30:58.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.995 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1070828 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1070828 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1070828 ']' 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:58.995 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:58.995 [2024-07-20 18:06:33.697582] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:30:58.995 [2024-07-20 18:06:33.697653] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:58.995 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.995 [2024-07-20 18:06:33.767966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:59.253 [2024-07-20 18:06:33.863179] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:59.253 [2024-07-20 18:06:33.863239] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:59.253 [2024-07-20 18:06:33.863260] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.253 [2024-07-20 18:06:33.863274] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:59.253 [2024-07-20 18:06:33.863286] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:59.253 [2024-07-20 18:06:33.865817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.253 [2024-07-20 18:06:33.865829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.253 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:59.253 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:30:59.253 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:59.253 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:59.253 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:59.253 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.253 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1070828 00:30:59.253 18:06:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:59.511 [2024-07-20 18:06:34.218645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.511 18:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:59.769 Malloc0 00:30:59.769 18:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:00.026 18:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:00.283 18:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:00.540 [2024-07-20 18:06:35.235959] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.540 18:06:35 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:00.798 [2024-07-20 18:06:35.484681] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:00.798 18:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1071053 00:31:00.799 18:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:00.799 18:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:00.799 18:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1071053 /var/tmp/bdevperf.sock 00:31:00.799 18:06:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1071053 ']' 00:31:00.799 18:06:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:00.799 18:06:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:00.799 18:06:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:00.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:00.799 18:06:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:00.799 18:06:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:01.057 18:06:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:01.057 18:06:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:01.057 18:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:01.314 18:06:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:01.877 Nvme0n1 00:31:01.877 18:06:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:02.134 Nvme0n1 00:31:02.393 18:06:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:02.393 18:06:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:04.288 18:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:04.288 18:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:04.546 18:06:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:04.802 18:06:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:05.735 18:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:05.735 18:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:05.735 18:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.735 18:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:05.992 18:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.992 18:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:05.992 18:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.992 18:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:06.249 18:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:06.250 18:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:06.250 18:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.250 18:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:06.507 18:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.507 18:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:06.507 18:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.507 18:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:06.776 18:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.776 18:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:06.776 18:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.776 18:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:31:07.056 18:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:07.056 18:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:07.056 18:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.056 18:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:07.313 18:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:07.313 18:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:07.313 18:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:07.571 18:06:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:07.830 18:06:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:08.761 18:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:08.761 18:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:08.761 18:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.761 18:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:09.018 18:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:09.018 18:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:09.018 18:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.018 18:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:09.314 18:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.314 18:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:09.314 18:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.314 18:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:09.570 18:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:31:09.570 18:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:09.570 18:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.570 18:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:09.827 18:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.827 18:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:09.827 18:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.827 18:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:10.084 18:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:10.084 18:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:10.084 18:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:10.084 18:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:10.340 18:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:10.340 18:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:10.340 18:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:10.597 18:06:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:10.854 18:06:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:11.785 18:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:11.785 18:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:11.785 18:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.786 18:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:12.043 18:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.043 18:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:31:12.043 18:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.043 18:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:12.300 18:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:12.300 18:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:12.300 18:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.300 18:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:12.557 18:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.557 18:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:12.557 18:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.557 18:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:12.815 18:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.815 18:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:12.815 18:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.815 18:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:13.073 18:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.073 18:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:13.073 18:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.073 18:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:13.330 18:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.330 18:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:13.331 18:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:13.588 18:06:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:13.848 18:06:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:14.784 18:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:14.784 18:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:14.784 18:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.784 18:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:15.043 18:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.043 18:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:15.043 18:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.043 18:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:15.301 18:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:15.301 18:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:15.301 18:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.301 18:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:15.559 18:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.559 18:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:15.560 18:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.560 18:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:15.816 18:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.816 18:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:15.817 18:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.817 18:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:16.074 18:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
00:31:16.074 18:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:16.074 18:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.074 18:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:16.331 18:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:16.331 18:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:16.331 18:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:16.589 18:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:16.847 18:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:17.778 18:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:17.778 18:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:17.778 18:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.778 18:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:18.035 18:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:18.036 18:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:18.036 18:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.036 18:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:18.293 18:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:18.293 18:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:18.293 18:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.293 18:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:18.550 18:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.550 18:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
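The trace around this point repeats one fixed pattern: flip the ANA state of each listener with nvmf_subsystem_listener_set_ana_state, sleep a second, then read the host-side view back through bdev_nvme_get_io_paths and filter it with jq. A minimal sketch of that pattern, assuming the same target as in this run (10.0.0.2, ports 4420/4421, nqn.2016-06.io.spdk:cnode1) and a bdevperf RPC socket at /var/tmp/bdevperf.sock, might look like:

#!/usr/bin/env bash
# Hedged sketch, not part of the recorded test run: set the ANA state of both
# listeners, give the initiator a moment to pick up the change, then inspect
# the io_paths reported by the bdev_nvme module for one port.
rpc=./scripts/rpc.py                       # assumed path to SPDK's rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
sock=/var/tmp/bdevperf.sock

set_ana() {                                # $1 = state for port 4420, $2 = state for port 4421
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

port_field() {                             # $1 = trsvcid, $2 = current|connected|accessible
    $rpc -s $sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2"
}

set_ana inaccessible optimized
sleep 1
port_field 4420 accessible                 # expected to print: false
port_field 4421 current                    # expected to print: true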
00:31:18.550 18:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.550 18:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:18.806 18:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.806 18:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:18.806 18:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.806 18:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:19.062 18:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:19.062 18:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:19.062 18:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.062 18:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:19.319 18:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:19.319 18:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:19.319 18:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:19.576 18:06:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:19.834 18:06:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:20.799 18:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:20.799 18:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:20.799 18:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.799 18:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:21.055 18:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:21.055 18:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:21.055 18:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.055 18:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:21.312 18:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.312 18:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:21.312 18:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.312 18:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:21.569 18:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.569 18:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:21.569 18:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.570 18:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:21.827 18:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.827 18:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:21.827 18:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.827 18:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:22.084 18:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:22.084 18:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:22.084 18:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.084 18:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:22.342 18:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.342 18:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:22.600 18:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:22.600 18:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:31:22.857 18:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:23.115 18:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:24.048 18:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:24.048 18:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:24.048 18:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.048 18:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:24.306 18:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.306 18:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:24.306 18:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.306 18:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:24.564 18:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.564 18:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:24.564 18:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.564 18:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:24.822 18:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.822 18:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:24.822 18:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.822 18:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:25.080 18:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.080 18:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:25.080 18:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.080 18:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:25.337 18:06:59 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.337 18:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:25.337 18:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.337 18:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:25.595 18:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.595 18:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:25.595 18:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:25.851 18:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:26.108 18:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:27.038 18:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:27.038 18:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:27.038 18:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.038 18:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:27.294 18:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:27.294 18:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:27.294 18:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.294 18:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:27.550 18:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.550 18:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:27.550 18:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.550 18:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:27.806 18:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.807 18:07:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:27.807 18:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.807 18:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:28.063 18:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.063 18:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:28.063 18:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.063 18:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:28.320 18:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.320 18:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:28.320 18:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.320 18:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:28.577 18:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.577 18:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:28.577 18:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:28.834 18:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:29.091 18:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:30.025 18:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:30.025 18:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:30.025 18:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.025 18:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:30.283 18:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.283 18:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:30.283 18:07:05 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.283 18:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:30.540 18:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.540 18:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:30.540 18:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.540 18:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:30.798 18:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.798 18:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:30.798 18:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.798 18:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:31.363 18:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.363 18:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:31.363 18:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.363 18:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:31.363 18:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.363 18:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:31.363 18:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.363 18:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:31.622 18:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.622 18:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:31.622 18:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:31.881 18:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:32.139 18:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:33.509 18:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:33.509 18:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:33.509 18:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.509 18:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:33.509 18:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.509 18:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:33.509 18:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.509 18:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:33.796 18:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:33.797 18:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:33.797 18:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.797 18:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:34.053 18:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.054 18:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:34.054 18:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.054 18:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:34.311 18:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.311 18:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:34.311 18:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.311 18:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:34.569 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.569 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:34.569 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.569 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:34.827 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:34.827 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1071053 00:31:34.827 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1071053 ']' 00:31:34.827 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1071053 00:31:34.827 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:34.827 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:34.827 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1071053 00:31:34.827 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:31:34.827 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:31:34.827 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1071053' 00:31:34.827 killing process with pid 1071053 00:31:34.827 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1071053 00:31:34.827 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1071053 00:31:34.827 Connection closed with partial response: 00:31:34.827 00:31:34.827 00:31:35.087 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1071053 00:31:35.087 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:35.087 [2024-07-20 18:06:35.546159] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:35.087 [2024-07-20 18:06:35.546250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1071053 ] 00:31:35.087 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.087 [2024-07-20 18:06:35.609650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.087 [2024-07-20 18:06:35.698558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:35.087 Running I/O for 90 seconds... 
00:31:35.087 [2024-07-20 18:06:51.209166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.087 [2024-07-20 18:06:51.209224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:35.087 [2024-07-20 18:06:51.209281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.087 [2024-07-20 18:06:51.209301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:35.087 [2024-07-20 18:06:51.209325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.087 [2024-07-20 18:06:51.209342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:35.087 [2024-07-20 18:06:51.209364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.087 [2024-07-20 18:06:51.209380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:35.087 [2024-07-20 18:06:51.209402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.087 [2024-07-20 18:06:51.209417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:35.087 [2024-07-20 18:06:51.209438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.087 [2024-07-20 18:06:51.209454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:35.087 [2024-07-20 18:06:51.209476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.087 [2024-07-20 18:06:51.209492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:35.087 [2024-07-20 18:06:51.209512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.087 [2024-07-20 18:06:51.209529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:35.087 [2024-07-20 18:06:51.209551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.087 [2024-07-20 18:06:51.209568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:35.087 [2024-07-20 18:06:51.209789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.087 [2024-07-20 18:06:51.209837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:35.087 [the nvme_qpair.c NOTICE pairs repeat in this same pattern for several hundred further commands on qid:1: WRITE completions for lba 121584 through 122008 and READ completions for lba 121248 through 121496 at 18:06:51, then a second burst at 18:07:06 starting at lba 59768/60320, every completion carrying the same ASYMMETRIC ACCESS INACCESSIBLE (03/02) status; the final pair of that burst continues below] [2024-07-20 18:07:06.873330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.090 [2024-07-20 18:07:06.873346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS
INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:35.090 [2024-07-20 18:07:06.873366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.090 [2024-07-20 18:07:06.873382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:35.090 [2024-07-20 18:07:06.873403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.090 [2024-07-20 18:07:06.873419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:35.090 [2024-07-20 18:07:06.873704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.090 [2024-07-20 18:07:06.873733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:35.090 [2024-07-20 18:07:06.873761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.090 [2024-07-20 18:07:06.873804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:35.090 [2024-07-20 18:07:06.873832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.090 [2024-07-20 18:07:06.873850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:35.090 [2024-07-20 18:07:06.873873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.090 [2024-07-20 18:07:06.873891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:35.090 Received shutdown signal, test time was about 32.318007 seconds 00:31:35.090 00:31:35.090 Latency(us) 00:31:35.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:35.090 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:35.090 Verification LBA range: start 0x0 length 0x4000 00:31:35.090 Nvme0n1 : 32.32 7942.70 31.03 0.00 0.00 16088.50 582.54 4026531.84 00:31:35.090 =================================================================================================================== 00:31:35.090 Total : 7942.70 31.03 0.00 0.00 16088.50 582.54 4026531.84 00:31:35.090 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- 
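The teardown at the end of multipath_status.sh (@143 through @148 above, plus the nvmftestfini expansion in the records that follow) reduces to a handful of steps; a rough sketch, with workspace paths shortened and the pid handling simplified relative to the real killprocess helper:

    # delete the subsystem created for the test, then unload initiator modules and stop the target
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f test/nvmf/host/try.txt          # scratch file written during the test
    modprobe -v -r nvme-tcp               # initiator-side kernel modules
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"    # stop nvmf_tgt (pid 1070828 in this run)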
nvmf/common.sh@488 -- # nvmfcleanup 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:35.348 rmmod nvme_tcp 00:31:35.348 rmmod nvme_fabrics 00:31:35.348 rmmod nvme_keyring 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1070828 ']' 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1070828 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1070828 ']' 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1070828 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1070828 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1070828' 00:31:35.348 killing process with pid 1070828 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1070828 00:31:35.348 18:07:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1070828 00:31:35.606 18:07:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:35.606 18:07:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:35.606 18:07:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:35.606 18:07:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:35.606 18:07:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:35.606 18:07:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.606 18:07:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:35.606 18:07:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.507 18:07:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:37.507 00:31:37.507 real 0m40.715s 00:31:37.507 user 2m2.598s 00:31:37.507 sys 0m10.766s 00:31:37.507 18:07:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 
-- # xtrace_disable 00:31:37.507 18:07:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:37.507 ************************************ 00:31:37.507 END TEST nvmf_host_multipath_status 00:31:37.507 ************************************ 00:31:37.507 18:07:12 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:37.766 18:07:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:37.766 18:07:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:37.766 18:07:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:37.766 ************************************ 00:31:37.766 START TEST nvmf_discovery_remove_ifc 00:31:37.766 ************************************ 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:37.766 * Looking for test storage... 00:31:37.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:37.766 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
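The NVME_CONNECT and NVME_HOST values exported a few records back are what tests that drive the kernel initiator pass to nvme-cli; purely as an illustration of how they combine (the address, port and subsystem NQN base are the ones this log brings up elsewhere, not a connection made at this point, and the :cnode0 suffix is hypothetical):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55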
00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:37.767 18:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:39.668 18:07:14 
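gather_supported_nvmf_pci_devs, which runs next, enumerates the supported NICs and the net devices bound to them; in spirit it is close to the sketch below (assuming lspci and sysfs are available; the real helper also checks the Mellanox and x722 device IDs and the RDMA/TCP cases):

    # find Intel E810 functions (device id 0x159b) and list the netdevs behind them
    for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
        echo "Found $pci (0x8086 - 0x159b)"
        ls "/sys/bus/pci/devices/$pci/net/"   # cvl_0_0 and cvl_0_1 on this node
    done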
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:39.668 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:39.669 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:39.669 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:39.669 18:07:14 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:39.669 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:39.669 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:39.669 18:07:14 
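nvmf_tcp_init, which the following records execute, moves the first port into its own network namespace for the target and leaves the second port in the default namespace for the initiator; stripped of the xtrace noise, the sequence amounts to this (interface names and addresses as on this node):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target sanity check

The reverse ping from inside the namespace, visible just below, checks the target-to-initiator direction.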
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:39.669 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:39.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:39.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:31:39.928 00:31:39.928 --- 10.0.0.2 ping statistics --- 00:31:39.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.928 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:39.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:39.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:31:39.928 00:31:39.928 --- 10.0.0.1 ping statistics --- 00:31:39.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.928 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1077171 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1077171 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1077171 ']' 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:39.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:39.928 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:39.928 [2024-07-20 18:07:14.573325] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
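For reference, the network topology the trace has just verified can be rebuilt by hand with the same commands it shows. This is a minimal sketch, assuming two back-to-back E810 ports already renamed cvl_0_0/cvl_0_1 as in this run; names, addresses and the workspace path will differ on other hosts:

# move one port into a private namespace so target and initiator use separate stacks
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator side keeps 10.0.0.1, target side (inside the namespace) gets 10.0.0.2
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic to port 4420 and confirm reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1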
00:31:39.928 [2024-07-20 18:07:14.573396] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:39.928 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.928 [2024-07-20 18:07:14.639840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.187 [2024-07-20 18:07:14.727605] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.187 [2024-07-20 18:07:14.727671] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.187 [2024-07-20 18:07:14.727685] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.187 [2024-07-20 18:07:14.727696] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:40.187 [2024-07-20 18:07:14.727706] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:40.187 [2024-07-20 18:07:14.727733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:40.187 [2024-07-20 18:07:14.881203] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.187 [2024-07-20 18:07:14.889413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:40.187 null0 00:31:40.187 [2024-07-20 18:07:14.921332] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1077208 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1077208 /tmp/host.sock 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1077208 ']' 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:40.187 
18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:40.187 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:40.187 18:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:40.446 [2024-07-20 18:07:14.990765] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:40.446 [2024-07-20 18:07:14.990868] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1077208 ] 00:31:40.446 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.446 [2024-07-20 18:07:15.056029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.446 [2024-07-20 18:07:15.144521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.446 18:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:40.446 18:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:31:40.446 18:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:40.446 18:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:40.446 18:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.446 18:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:40.446 18:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.446 18:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:40.446 18:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.446 18:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:40.705 18:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.705 18:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:40.705 18:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.705 18:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:41.640 [2024-07-20 18:07:16.361207] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:41.640 [2024-07-20 18:07:16.361239] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:41.640 [2024-07-20 18:07:16.361264] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:41.898 [2024-07-20 18:07:16.488685] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:41.898 [2024-07-20 18:07:16.589656] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:41.898 [2024-07-20 18:07:16.589733] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:41.898 [2024-07-20 18:07:16.589779] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:41.898 [2024-07-20 18:07:16.589813] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:41.898 [2024-07-20 18:07:16.589854] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:41.898 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.898 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:41.898 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:41.898 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:41.898 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:41.898 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.898 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:41.898 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:41.898 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:41.898 [2024-07-20 18:07:16.598730] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2025900 was disconnected and freed. delete nvme_qpair. 
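The attach sequence above can also be driven manually: one nvmf_tgt runs as the target inside the namespace, a second nvmf_tgt acts as the host against /tmp/host.sock and is configured over RPC. A hedged sketch using scripts/rpc.py directly (the harness's rpc_cmd is a wrapper around it; waitforlisten, PID tracking and the target-side subsystem/listener RPCs are not expanded in this trace, so they are omitted here):

# target side, inside the namespace (listeners on 8009 and 4420 are set up via RPC, not shown in the trace)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# host side: discovery client built on bdev_nvme, RPC socket at /tmp/host.sock
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
# (the harness waits for each RPC socket before issuing commands)
scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1   # options exactly as passed in this run
scripts/rpc.py -s /tmp/host.sock framework_start_init
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach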
00:31:41.898 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.899 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:41.899 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:41.899 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:41.899 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:41.899 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:41.899 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:41.899 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.899 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:41.899 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:41.899 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:41.899 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:42.158 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.158 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:42.158 18:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:43.092 18:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:43.092 18:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:43.092 18:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:43.092 18:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.092 18:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:43.092 18:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:43.092 18:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:43.092 18:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.092 18:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:43.092 18:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:44.026 18:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:44.026 18:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:44.026 18:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:44.026 18:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.026 18:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:44.026 18:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:31:44.026 18:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:44.026 18:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.026 18:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:44.026 18:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:45.398 18:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:45.398 18:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:45.398 18:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:45.398 18:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.398 18:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:45.398 18:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:45.398 18:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:45.398 18:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.398 18:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:45.398 18:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:46.327 18:07:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:46.327 18:07:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:46.327 18:07:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:46.327 18:07:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.327 18:07:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:46.327 18:07:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:46.327 18:07:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:46.327 18:07:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.327 18:07:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:46.327 18:07:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:47.256 18:07:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:47.256 18:07:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:47.256 18:07:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.256 18:07:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:47.256 18:07:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.256 18:07:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:47.256 18:07:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:47.256 18:07:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
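The sleep-1 polling seen above is the harness's get_bdev_list/wait_for_bdev pair. A standalone sketch of the same loop, assuming the host RPC socket at /tmp/host.sock and jq installed (the real helper also bounds the wait; that is omitted here):

get_bdev_list() {
    # list bdev names as a single space-separated string, as the trace does
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # poll once a second until the bdev list matches the expected string
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

# e.g. wait until the discovered namespace shows up as nvme0n1
wait_for_bdev nvme0n1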
00:31:47.256 18:07:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:47.256 18:07:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:47.256 [2024-07-20 18:07:22.030594] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:47.256 [2024-07-20 18:07:22.030671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.256 [2024-07-20 18:07:22.030695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.256 [2024-07-20 18:07:22.030714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.256 [2024-07-20 18:07:22.030730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.256 [2024-07-20 18:07:22.030746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.256 [2024-07-20 18:07:22.030761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.256 [2024-07-20 18:07:22.030777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.256 [2024-07-20 18:07:22.030801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.256 [2024-07-20 18:07:22.030820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.256 [2024-07-20 18:07:22.030849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.256 [2024-07-20 18:07:22.030870] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec990 is same with the state(5) to be set 00:31:47.256 [2024-07-20 18:07:22.040614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fec990 (9): Bad file descriptor 00:31:47.256 [2024-07-20 18:07:22.050658] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:48.187 18:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:48.187 18:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:48.187 18:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.187 18:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:48.187 18:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:48.187 18:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:48.187 18:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:48.444 [2024-07-20 18:07:23.113834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:48.444 [2024-07-20 
18:07:23.113890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fec990 with addr=10.0.0.2, port=4420 00:31:48.444 [2024-07-20 18:07:23.113914] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec990 is same with the state(5) to be set 00:31:48.444 [2024-07-20 18:07:23.113952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fec990 (9): Bad file descriptor 00:31:48.444 [2024-07-20 18:07:23.114374] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:48.444 [2024-07-20 18:07:23.114412] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:48.444 [2024-07-20 18:07:23.114431] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:48.444 [2024-07-20 18:07:23.114449] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:48.444 [2024-07-20 18:07:23.114476] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.444 [2024-07-20 18:07:23.114496] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:48.444 18:07:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.444 18:07:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:48.444 18:07:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:49.410 [2024-07-20 18:07:24.117004] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:49.410 [2024-07-20 18:07:24.117068] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:49.410 [2024-07-20 18:07:24.117083] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:49.410 [2024-07-20 18:07:24.117098] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:49.410 [2024-07-20 18:07:24.117129] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.410 [2024-07-20 18:07:24.117182] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:49.410 [2024-07-20 18:07:24.117239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:49.410 [2024-07-20 18:07:24.117262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:49.410 [2024-07-20 18:07:24.117281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:49.410 [2024-07-20 18:07:24.117300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:49.410 [2024-07-20 18:07:24.117315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:49.410 [2024-07-20 18:07:24.117330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:49.410 [2024-07-20 18:07:24.117345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:49.410 [2024-07-20 18:07:24.117358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:49.410 [2024-07-20 18:07:24.117373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:49.410 [2024-07-20 18:07:24.117386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:49.410 [2024-07-20 18:07:24.117400] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
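The connect() errno 110 messages, the failed controller reset and the "Remove discovery entry" above are the expected reaction to the fault injected earlier in the trace: the target's address and link were pulled out from under the live connection, and with --ctrlr-loss-timeout-sec 2 the host gives the controller up roughly two seconds later. The injection itself is just the two commands already traced:

# drop the target's address and take the port down inside the namespace
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
# once the controller-loss timeout expires the bdev disappears, so the list polls back to empty
wait_for_bdev ''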
00:31:49.410 [2024-07-20 18:07:24.117535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1febde0 (9): Bad file descriptor 00:31:49.410 [2024-07-20 18:07:24.118547] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:49.410 [2024-07-20 18:07:24.118569] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:49.410 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:49.410 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:49.410 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:49.410 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.410 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:49.410 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:49.410 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:49.410 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.410 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:49.410 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:49.410 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:49.667 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:49.667 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:49.667 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:49.668 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:49.668 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.668 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:49.668 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:49.668 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:49.668 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.668 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:49.668 18:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:50.601 18:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:50.601 18:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:50.601 18:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:50.601 18:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:50.601 18:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.601 18:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:50.601 18:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:50.601 18:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.601 18:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:50.601 18:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:51.535 [2024-07-20 18:07:26.136505] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:51.535 [2024-07-20 18:07:26.136542] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:51.535 [2024-07-20 18:07:26.136563] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:51.535 [2024-07-20 18:07:26.266994] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:51.535 18:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:51.535 18:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:51.535 18:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:51.535 18:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.535 18:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:51.535 18:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:51.535 18:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:51.535 18:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.792 18:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:51.792 18:07:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:51.792 [2024-07-20 18:07:26.448445] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:51.792 [2024-07-20 18:07:26.448497] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:51.792 [2024-07-20 18:07:26.448529] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:51.793 [2024-07-20 18:07:26.448550] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:51.793 [2024-07-20 18:07:26.448564] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:51.793 [2024-07-20 18:07:26.454984] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1fd41d0 was disconnected and freed. delete nvme_qpair. 
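Re-adding the address and bringing the port back up, as traced just above, is all that is needed for the still-running discovery service to re-attach and surface the namespace under a new name, nvme1n1. A sketch of that recovery step:

ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# discovery re-attaches on its own; wait until the new bdev is visible
wait_for_bdev nvme1n1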
00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1077208 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1077208 ']' 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1077208 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1077208 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1077208' 00:31:52.725 killing process with pid 1077208 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1077208 00:31:52.725 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1077208 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:52.983 rmmod nvme_tcp 00:31:52.983 rmmod nvme_fabrics 00:31:52.983 rmmod nvme_keyring 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
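The rmmod lines above are the verbose output of the module teardown in nvmftestfini, after killprocess has stopped the host-side app. A sketch of the equivalent manual cleanup, assuming nothing else on the box still uses the kernel NVMe/TCP initiator (hostpid is 1077208 in this run):

kill "$hostpid"             # the /tmp/host.sock nvmf_tgt instance
sync
modprobe -v -r nvme-tcp     # also pulls nvme_fabrics / nvme_keyring out, as logged above
modprobe -v -r nvme-fabrics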
00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1077171 ']' 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1077171 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1077171 ']' 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1077171 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1077171 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1077171' 00:31:52.983 killing process with pid 1077171 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1077171 00:31:52.983 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1077171 00:31:53.241 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:53.241 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:53.241 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:53.241 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:53.241 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:53.241 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.241 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:53.241 18:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.769 18:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:55.769 00:31:55.769 real 0m17.689s 00:31:55.769 user 0m25.651s 00:31:55.769 sys 0m3.012s 00:31:55.769 18:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:55.769 18:07:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:55.769 ************************************ 00:31:55.769 END TEST nvmf_discovery_remove_ifc 00:31:55.769 ************************************ 00:31:55.769 18:07:30 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:55.769 18:07:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:55.769 18:07:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:55.769 18:07:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:55.769 ************************************ 00:31:55.769 START TEST nvmf_identify_kernel_target 00:31:55.769 ************************************ 00:31:55.769 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:55.769 * Looking for test storage... 00:31:55.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:55.769 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
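The host NQN and host ID exported above come straight from nvme-cli. A small sketch of one way to derive them (the variable names mirror the trace; the harness's exact parameter expansion is not visible here):

# nvme gen-hostnqn prints an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTNQN=$(nvme gen-hostnqn)
# the host ID used alongside it is the UUID portion after the last colon
NVME_HOSTID=${NVME_HOSTNQN##*:}
echo "$NVME_HOSTNQN"
echo "$NVME_HOSTID"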
00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:55.770 18:07:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:57.670 18:07:32 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:57.670 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:57.671 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:57.671 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:57.671 
18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:57.671 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:57.671 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:57.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:57.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:31:57.671 00:31:57.671 --- 10.0.0.2 ping statistics --- 00:31:57.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.671 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:57.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:57.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:31:57.671 00:31:57.671 --- 10.0.0.1 ping statistics --- 00:31:57.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.671 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.671 
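(Context for the nvmf_tcp_init trace above: the test builds a single-host NVMe/TCP loopback bed out of the two E810 ports enumerated earlier. cvl_0_0 is moved into a private network namespace, cvl_0_0_ns_spdk, and becomes the target-side interface at 10.0.0.2; its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace, and reusing the interface and namespace names it shows, the setup is roughly:

    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port out of the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # accept NVMe/TCP traffic on port 4420

The two one-packet pings that follow only verify that the root namespace and cvl_0_0_ns_spdk can reach each other before the kernel target is configured.)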
18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:57.671 18:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:58.605 Waiting for block devices as requested 00:31:58.605 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:58.862 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:58.862 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:58.862 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:58.862 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:59.131 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:59.131 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:59.131 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:59.131 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:59.131 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:59.389 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:59.389 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:59.389 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:59.646 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:59.646 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:59.646 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:59.646 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:59.904 18:07:34 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:59.904 No valid GPT data, bailing 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:59.904 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:59.905 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:59.905 00:31:59.905 Discovery Log Number of Records 2, Generation counter 2 00:31:59.905 =====Discovery Log Entry 0====== 00:31:59.905 trtype: tcp 00:31:59.905 adrfam: ipv4 00:31:59.905 subtype: current discovery subsystem 00:31:59.905 treq: not specified, sq flow control disable supported 00:31:59.905 portid: 1 00:31:59.905 trsvcid: 4420 00:31:59.905 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:59.905 traddr: 10.0.0.1 00:31:59.905 eflags: none 00:31:59.905 sectype: none 00:31:59.905 =====Discovery Log Entry 1====== 
00:31:59.905 trtype: tcp 00:31:59.905 adrfam: ipv4 00:31:59.905 subtype: nvme subsystem 00:31:59.905 treq: not specified, sq flow control disable supported 00:31:59.905 portid: 1 00:31:59.905 trsvcid: 4420 00:31:59.905 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:59.905 traddr: 10.0.0.1 00:31:59.905 eflags: none 00:31:59.905 sectype: none 00:31:59.905 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:59.905 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:59.905 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.905 ===================================================== 00:31:59.905 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:59.905 ===================================================== 00:31:59.905 Controller Capabilities/Features 00:31:59.905 ================================ 00:31:59.905 Vendor ID: 0000 00:31:59.905 Subsystem Vendor ID: 0000 00:31:59.905 Serial Number: 795a63547dbad05c1d5c 00:31:59.905 Model Number: Linux 00:31:59.905 Firmware Version: 6.7.0-68 00:31:59.905 Recommended Arb Burst: 0 00:31:59.905 IEEE OUI Identifier: 00 00 00 00:31:59.905 Multi-path I/O 00:31:59.905 May have multiple subsystem ports: No 00:31:59.905 May have multiple controllers: No 00:31:59.905 Associated with SR-IOV VF: No 00:31:59.905 Max Data Transfer Size: Unlimited 00:31:59.905 Max Number of Namespaces: 0 00:31:59.905 Max Number of I/O Queues: 1024 00:31:59.905 NVMe Specification Version (VS): 1.3 00:31:59.905 NVMe Specification Version (Identify): 1.3 00:31:59.905 Maximum Queue Entries: 1024 00:31:59.905 Contiguous Queues Required: No 00:31:59.905 Arbitration Mechanisms Supported 00:31:59.905 Weighted Round Robin: Not Supported 00:31:59.905 Vendor Specific: Not Supported 00:31:59.905 Reset Timeout: 7500 ms 00:31:59.905 Doorbell Stride: 4 bytes 00:31:59.905 NVM Subsystem Reset: Not Supported 00:31:59.905 Command Sets Supported 00:31:59.905 NVM Command Set: Supported 00:31:59.905 Boot Partition: Not Supported 00:31:59.905 Memory Page Size Minimum: 4096 bytes 00:31:59.905 Memory Page Size Maximum: 4096 bytes 00:31:59.905 Persistent Memory Region: Not Supported 00:31:59.905 Optional Asynchronous Events Supported 00:31:59.905 Namespace Attribute Notices: Not Supported 00:31:59.905 Firmware Activation Notices: Not Supported 00:31:59.905 ANA Change Notices: Not Supported 00:31:59.905 PLE Aggregate Log Change Notices: Not Supported 00:31:59.905 LBA Status Info Alert Notices: Not Supported 00:31:59.905 EGE Aggregate Log Change Notices: Not Supported 00:31:59.905 Normal NVM Subsystem Shutdown event: Not Supported 00:31:59.905 Zone Descriptor Change Notices: Not Supported 00:31:59.905 Discovery Log Change Notices: Supported 00:31:59.905 Controller Attributes 00:31:59.905 128-bit Host Identifier: Not Supported 00:31:59.905 Non-Operational Permissive Mode: Not Supported 00:31:59.905 NVM Sets: Not Supported 00:31:59.905 Read Recovery Levels: Not Supported 00:31:59.905 Endurance Groups: Not Supported 00:31:59.905 Predictable Latency Mode: Not Supported 00:31:59.905 Traffic Based Keep ALive: Not Supported 00:31:59.905 Namespace Granularity: Not Supported 00:31:59.905 SQ Associations: Not Supported 00:31:59.905 UUID List: Not Supported 00:31:59.905 Multi-Domain Subsystem: Not Supported 00:31:59.905 Fixed Capacity Management: Not Supported 00:31:59.905 Variable Capacity Management: Not 
Supported 00:31:59.905 Delete Endurance Group: Not Supported 00:31:59.905 Delete NVM Set: Not Supported 00:31:59.905 Extended LBA Formats Supported: Not Supported 00:31:59.905 Flexible Data Placement Supported: Not Supported 00:31:59.905 00:31:59.905 Controller Memory Buffer Support 00:31:59.905 ================================ 00:31:59.905 Supported: No 00:31:59.905 00:31:59.905 Persistent Memory Region Support 00:31:59.905 ================================ 00:31:59.905 Supported: No 00:31:59.905 00:31:59.905 Admin Command Set Attributes 00:31:59.905 ============================ 00:31:59.905 Security Send/Receive: Not Supported 00:31:59.905 Format NVM: Not Supported 00:31:59.905 Firmware Activate/Download: Not Supported 00:31:59.905 Namespace Management: Not Supported 00:31:59.905 Device Self-Test: Not Supported 00:31:59.905 Directives: Not Supported 00:31:59.905 NVMe-MI: Not Supported 00:31:59.905 Virtualization Management: Not Supported 00:31:59.905 Doorbell Buffer Config: Not Supported 00:31:59.905 Get LBA Status Capability: Not Supported 00:31:59.905 Command & Feature Lockdown Capability: Not Supported 00:31:59.905 Abort Command Limit: 1 00:31:59.905 Async Event Request Limit: 1 00:31:59.905 Number of Firmware Slots: N/A 00:31:59.905 Firmware Slot 1 Read-Only: N/A 00:31:59.905 Firmware Activation Without Reset: N/A 00:31:59.905 Multiple Update Detection Support: N/A 00:31:59.905 Firmware Update Granularity: No Information Provided 00:31:59.905 Per-Namespace SMART Log: No 00:31:59.905 Asymmetric Namespace Access Log Page: Not Supported 00:31:59.905 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:59.905 Command Effects Log Page: Not Supported 00:31:59.905 Get Log Page Extended Data: Supported 00:31:59.905 Telemetry Log Pages: Not Supported 00:31:59.905 Persistent Event Log Pages: Not Supported 00:31:59.905 Supported Log Pages Log Page: May Support 00:31:59.905 Commands Supported & Effects Log Page: Not Supported 00:31:59.905 Feature Identifiers & Effects Log Page:May Support 00:31:59.905 NVMe-MI Commands & Effects Log Page: May Support 00:31:59.905 Data Area 4 for Telemetry Log: Not Supported 00:31:59.905 Error Log Page Entries Supported: 1 00:31:59.905 Keep Alive: Not Supported 00:31:59.905 00:31:59.905 NVM Command Set Attributes 00:31:59.905 ========================== 00:31:59.905 Submission Queue Entry Size 00:31:59.905 Max: 1 00:31:59.905 Min: 1 00:31:59.905 Completion Queue Entry Size 00:31:59.905 Max: 1 00:31:59.905 Min: 1 00:31:59.905 Number of Namespaces: 0 00:31:59.905 Compare Command: Not Supported 00:31:59.905 Write Uncorrectable Command: Not Supported 00:31:59.905 Dataset Management Command: Not Supported 00:31:59.905 Write Zeroes Command: Not Supported 00:31:59.905 Set Features Save Field: Not Supported 00:31:59.905 Reservations: Not Supported 00:31:59.905 Timestamp: Not Supported 00:31:59.905 Copy: Not Supported 00:31:59.905 Volatile Write Cache: Not Present 00:31:59.905 Atomic Write Unit (Normal): 1 00:31:59.905 Atomic Write Unit (PFail): 1 00:31:59.905 Atomic Compare & Write Unit: 1 00:31:59.905 Fused Compare & Write: Not Supported 00:31:59.905 Scatter-Gather List 00:31:59.905 SGL Command Set: Supported 00:31:59.905 SGL Keyed: Not Supported 00:31:59.905 SGL Bit Bucket Descriptor: Not Supported 00:31:59.905 SGL Metadata Pointer: Not Supported 00:31:59.905 Oversized SGL: Not Supported 00:31:59.905 SGL Metadata Address: Not Supported 00:31:59.905 SGL Offset: Supported 00:31:59.905 Transport SGL Data Block: Not Supported 00:31:59.905 Replay Protected Memory Block: 
Not Supported 00:31:59.905 00:31:59.905 Firmware Slot Information 00:31:59.905 ========================= 00:31:59.905 Active slot: 0 00:31:59.905 00:31:59.905 00:31:59.905 Error Log 00:31:59.905 ========= 00:31:59.905 00:31:59.905 Active Namespaces 00:31:59.905 ================= 00:31:59.905 Discovery Log Page 00:31:59.905 ================== 00:31:59.905 Generation Counter: 2 00:31:59.905 Number of Records: 2 00:31:59.905 Record Format: 0 00:31:59.905 00:31:59.905 Discovery Log Entry 0 00:31:59.905 ---------------------- 00:31:59.905 Transport Type: 3 (TCP) 00:31:59.905 Address Family: 1 (IPv4) 00:31:59.905 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:59.905 Entry Flags: 00:31:59.905 Duplicate Returned Information: 0 00:31:59.905 Explicit Persistent Connection Support for Discovery: 0 00:31:59.905 Transport Requirements: 00:31:59.905 Secure Channel: Not Specified 00:31:59.905 Port ID: 1 (0x0001) 00:31:59.905 Controller ID: 65535 (0xffff) 00:31:59.905 Admin Max SQ Size: 32 00:31:59.905 Transport Service Identifier: 4420 00:31:59.905 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:59.905 Transport Address: 10.0.0.1 00:31:59.905 Discovery Log Entry 1 00:31:59.905 ---------------------- 00:31:59.905 Transport Type: 3 (TCP) 00:31:59.905 Address Family: 1 (IPv4) 00:31:59.905 Subsystem Type: 2 (NVM Subsystem) 00:31:59.905 Entry Flags: 00:31:59.905 Duplicate Returned Information: 0 00:31:59.905 Explicit Persistent Connection Support for Discovery: 0 00:31:59.905 Transport Requirements: 00:31:59.905 Secure Channel: Not Specified 00:31:59.906 Port ID: 1 (0x0001) 00:31:59.906 Controller ID: 65535 (0xffff) 00:31:59.906 Admin Max SQ Size: 32 00:31:59.906 Transport Service Identifier: 4420 00:31:59.906 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:59.906 Transport Address: 10.0.0.1 00:31:59.906 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:59.906 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.906 get_feature(0x01) failed 00:31:59.906 get_feature(0x02) failed 00:31:59.906 get_feature(0x04) failed 00:31:59.906 ===================================================== 00:31:59.906 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:59.906 ===================================================== 00:31:59.906 Controller Capabilities/Features 00:31:59.906 ================================ 00:31:59.906 Vendor ID: 0000 00:31:59.906 Subsystem Vendor ID: 0000 00:31:59.906 Serial Number: 989b555945cd5d0c7cce 00:31:59.906 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:59.906 Firmware Version: 6.7.0-68 00:31:59.906 Recommended Arb Burst: 6 00:31:59.906 IEEE OUI Identifier: 00 00 00 00:31:59.906 Multi-path I/O 00:31:59.906 May have multiple subsystem ports: Yes 00:31:59.906 May have multiple controllers: Yes 00:31:59.906 Associated with SR-IOV VF: No 00:31:59.906 Max Data Transfer Size: Unlimited 00:31:59.906 Max Number of Namespaces: 1024 00:31:59.906 Max Number of I/O Queues: 128 00:31:59.906 NVMe Specification Version (VS): 1.3 00:31:59.906 NVMe Specification Version (Identify): 1.3 00:31:59.906 Maximum Queue Entries: 1024 00:31:59.906 Contiguous Queues Required: No 00:31:59.906 Arbitration Mechanisms Supported 00:31:59.906 Weighted Round Robin: Not Supported 00:31:59.906 Vendor Specific: Not Supported 
00:31:59.906 Reset Timeout: 7500 ms 00:31:59.906 Doorbell Stride: 4 bytes 00:31:59.906 NVM Subsystem Reset: Not Supported 00:31:59.906 Command Sets Supported 00:31:59.906 NVM Command Set: Supported 00:31:59.906 Boot Partition: Not Supported 00:31:59.906 Memory Page Size Minimum: 4096 bytes 00:31:59.906 Memory Page Size Maximum: 4096 bytes 00:31:59.906 Persistent Memory Region: Not Supported 00:31:59.906 Optional Asynchronous Events Supported 00:31:59.906 Namespace Attribute Notices: Supported 00:31:59.906 Firmware Activation Notices: Not Supported 00:31:59.906 ANA Change Notices: Supported 00:31:59.906 PLE Aggregate Log Change Notices: Not Supported 00:31:59.906 LBA Status Info Alert Notices: Not Supported 00:31:59.906 EGE Aggregate Log Change Notices: Not Supported 00:31:59.906 Normal NVM Subsystem Shutdown event: Not Supported 00:31:59.906 Zone Descriptor Change Notices: Not Supported 00:31:59.906 Discovery Log Change Notices: Not Supported 00:31:59.906 Controller Attributes 00:31:59.906 128-bit Host Identifier: Supported 00:31:59.906 Non-Operational Permissive Mode: Not Supported 00:31:59.906 NVM Sets: Not Supported 00:31:59.906 Read Recovery Levels: Not Supported 00:31:59.906 Endurance Groups: Not Supported 00:31:59.906 Predictable Latency Mode: Not Supported 00:31:59.906 Traffic Based Keep ALive: Supported 00:31:59.906 Namespace Granularity: Not Supported 00:31:59.906 SQ Associations: Not Supported 00:31:59.906 UUID List: Not Supported 00:31:59.906 Multi-Domain Subsystem: Not Supported 00:31:59.906 Fixed Capacity Management: Not Supported 00:31:59.906 Variable Capacity Management: Not Supported 00:31:59.906 Delete Endurance Group: Not Supported 00:31:59.906 Delete NVM Set: Not Supported 00:31:59.906 Extended LBA Formats Supported: Not Supported 00:31:59.906 Flexible Data Placement Supported: Not Supported 00:31:59.906 00:31:59.906 Controller Memory Buffer Support 00:31:59.906 ================================ 00:31:59.906 Supported: No 00:31:59.906 00:31:59.906 Persistent Memory Region Support 00:31:59.906 ================================ 00:31:59.906 Supported: No 00:31:59.906 00:31:59.906 Admin Command Set Attributes 00:31:59.906 ============================ 00:31:59.906 Security Send/Receive: Not Supported 00:31:59.906 Format NVM: Not Supported 00:31:59.906 Firmware Activate/Download: Not Supported 00:31:59.906 Namespace Management: Not Supported 00:31:59.906 Device Self-Test: Not Supported 00:31:59.906 Directives: Not Supported 00:31:59.906 NVMe-MI: Not Supported 00:31:59.906 Virtualization Management: Not Supported 00:31:59.906 Doorbell Buffer Config: Not Supported 00:31:59.906 Get LBA Status Capability: Not Supported 00:31:59.906 Command & Feature Lockdown Capability: Not Supported 00:31:59.906 Abort Command Limit: 4 00:31:59.906 Async Event Request Limit: 4 00:31:59.906 Number of Firmware Slots: N/A 00:31:59.906 Firmware Slot 1 Read-Only: N/A 00:31:59.906 Firmware Activation Without Reset: N/A 00:31:59.906 Multiple Update Detection Support: N/A 00:31:59.906 Firmware Update Granularity: No Information Provided 00:31:59.906 Per-Namespace SMART Log: Yes 00:31:59.906 Asymmetric Namespace Access Log Page: Supported 00:31:59.906 ANA Transition Time : 10 sec 00:31:59.906 00:31:59.906 Asymmetric Namespace Access Capabilities 00:31:59.906 ANA Optimized State : Supported 00:31:59.906 ANA Non-Optimized State : Supported 00:31:59.906 ANA Inaccessible State : Supported 00:31:59.906 ANA Persistent Loss State : Supported 00:31:59.906 ANA Change State : Supported 00:31:59.906 ANAGRPID is not 
changed : No 00:31:59.906 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:59.906 00:31:59.906 ANA Group Identifier Maximum : 128 00:31:59.906 Number of ANA Group Identifiers : 128 00:31:59.906 Max Number of Allowed Namespaces : 1024 00:31:59.906 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:59.906 Command Effects Log Page: Supported 00:31:59.906 Get Log Page Extended Data: Supported 00:31:59.906 Telemetry Log Pages: Not Supported 00:31:59.906 Persistent Event Log Pages: Not Supported 00:31:59.906 Supported Log Pages Log Page: May Support 00:31:59.906 Commands Supported & Effects Log Page: Not Supported 00:31:59.906 Feature Identifiers & Effects Log Page:May Support 00:31:59.906 NVMe-MI Commands & Effects Log Page: May Support 00:31:59.906 Data Area 4 for Telemetry Log: Not Supported 00:31:59.906 Error Log Page Entries Supported: 128 00:31:59.906 Keep Alive: Supported 00:31:59.906 Keep Alive Granularity: 1000 ms 00:31:59.906 00:31:59.906 NVM Command Set Attributes 00:31:59.906 ========================== 00:31:59.906 Submission Queue Entry Size 00:31:59.906 Max: 64 00:31:59.906 Min: 64 00:31:59.906 Completion Queue Entry Size 00:31:59.906 Max: 16 00:31:59.906 Min: 16 00:31:59.906 Number of Namespaces: 1024 00:31:59.906 Compare Command: Not Supported 00:31:59.906 Write Uncorrectable Command: Not Supported 00:31:59.906 Dataset Management Command: Supported 00:31:59.906 Write Zeroes Command: Supported 00:31:59.906 Set Features Save Field: Not Supported 00:31:59.906 Reservations: Not Supported 00:31:59.906 Timestamp: Not Supported 00:31:59.906 Copy: Not Supported 00:31:59.906 Volatile Write Cache: Present 00:31:59.906 Atomic Write Unit (Normal): 1 00:31:59.906 Atomic Write Unit (PFail): 1 00:31:59.906 Atomic Compare & Write Unit: 1 00:31:59.906 Fused Compare & Write: Not Supported 00:31:59.906 Scatter-Gather List 00:31:59.906 SGL Command Set: Supported 00:31:59.906 SGL Keyed: Not Supported 00:31:59.906 SGL Bit Bucket Descriptor: Not Supported 00:31:59.906 SGL Metadata Pointer: Not Supported 00:31:59.906 Oversized SGL: Not Supported 00:31:59.906 SGL Metadata Address: Not Supported 00:31:59.906 SGL Offset: Supported 00:31:59.906 Transport SGL Data Block: Not Supported 00:31:59.906 Replay Protected Memory Block: Not Supported 00:31:59.906 00:31:59.906 Firmware Slot Information 00:31:59.906 ========================= 00:31:59.906 Active slot: 0 00:31:59.906 00:31:59.906 Asymmetric Namespace Access 00:31:59.906 =========================== 00:31:59.906 Change Count : 0 00:31:59.906 Number of ANA Group Descriptors : 1 00:31:59.906 ANA Group Descriptor : 0 00:31:59.906 ANA Group ID : 1 00:31:59.906 Number of NSID Values : 1 00:31:59.906 Change Count : 0 00:31:59.906 ANA State : 1 00:31:59.906 Namespace Identifier : 1 00:31:59.906 00:31:59.906 Commands Supported and Effects 00:31:59.906 ============================== 00:31:59.906 Admin Commands 00:31:59.906 -------------- 00:31:59.906 Get Log Page (02h): Supported 00:31:59.906 Identify (06h): Supported 00:31:59.906 Abort (08h): Supported 00:31:59.906 Set Features (09h): Supported 00:31:59.906 Get Features (0Ah): Supported 00:31:59.906 Asynchronous Event Request (0Ch): Supported 00:31:59.906 Keep Alive (18h): Supported 00:31:59.906 I/O Commands 00:31:59.906 ------------ 00:31:59.906 Flush (00h): Supported 00:31:59.906 Write (01h): Supported LBA-Change 00:31:59.906 Read (02h): Supported 00:31:59.906 Write Zeroes (08h): Supported LBA-Change 00:31:59.906 Dataset Management (09h): Supported 00:31:59.906 00:31:59.906 Error Log 00:31:59.906 ========= 
00:31:59.906 Entry: 0 00:31:59.906 Error Count: 0x3 00:31:59.906 Submission Queue Id: 0x0 00:31:59.906 Command Id: 0x5 00:31:59.906 Phase Bit: 0 00:31:59.906 Status Code: 0x2 00:31:59.906 Status Code Type: 0x0 00:31:59.906 Do Not Retry: 1 00:31:59.906 Error Location: 0x28 00:31:59.907 LBA: 0x0 00:31:59.907 Namespace: 0x0 00:31:59.907 Vendor Log Page: 0x0 00:31:59.907 ----------- 00:31:59.907 Entry: 1 00:31:59.907 Error Count: 0x2 00:31:59.907 Submission Queue Id: 0x0 00:31:59.907 Command Id: 0x5 00:31:59.907 Phase Bit: 0 00:31:59.907 Status Code: 0x2 00:31:59.907 Status Code Type: 0x0 00:31:59.907 Do Not Retry: 1 00:31:59.907 Error Location: 0x28 00:31:59.907 LBA: 0x0 00:31:59.907 Namespace: 0x0 00:31:59.907 Vendor Log Page: 0x0 00:31:59.907 ----------- 00:31:59.907 Entry: 2 00:31:59.907 Error Count: 0x1 00:31:59.907 Submission Queue Id: 0x0 00:31:59.907 Command Id: 0x4 00:31:59.907 Phase Bit: 0 00:31:59.907 Status Code: 0x2 00:31:59.907 Status Code Type: 0x0 00:31:59.907 Do Not Retry: 1 00:31:59.907 Error Location: 0x28 00:31:59.907 LBA: 0x0 00:31:59.907 Namespace: 0x0 00:31:59.907 Vendor Log Page: 0x0 00:31:59.907 00:31:59.907 Number of Queues 00:31:59.907 ================ 00:31:59.907 Number of I/O Submission Queues: 128 00:31:59.907 Number of I/O Completion Queues: 128 00:31:59.907 00:31:59.907 ZNS Specific Controller Data 00:31:59.907 ============================ 00:31:59.907 Zone Append Size Limit: 0 00:31:59.907 00:31:59.907 00:31:59.907 Active Namespaces 00:31:59.907 ================= 00:31:59.907 get_feature(0x05) failed 00:31:59.907 Namespace ID:1 00:31:59.907 Command Set Identifier: NVM (00h) 00:31:59.907 Deallocate: Supported 00:31:59.907 Deallocated/Unwritten Error: Not Supported 00:31:59.907 Deallocated Read Value: Unknown 00:31:59.907 Deallocate in Write Zeroes: Not Supported 00:31:59.907 Deallocated Guard Field: 0xFFFF 00:31:59.907 Flush: Supported 00:31:59.907 Reservation: Not Supported 00:31:59.907 Namespace Sharing Capabilities: Multiple Controllers 00:31:59.907 Size (in LBAs): 1953525168 (931GiB) 00:31:59.907 Capacity (in LBAs): 1953525168 (931GiB) 00:31:59.907 Utilization (in LBAs): 1953525168 (931GiB) 00:31:59.907 UUID: 94c34bac-e3aa-4a63-a6c7-9c295260ae97 00:31:59.907 Thin Provisioning: Not Supported 00:31:59.907 Per-NS Atomic Units: Yes 00:31:59.907 Atomic Boundary Size (Normal): 0 00:31:59.907 Atomic Boundary Size (PFail): 0 00:31:59.907 Atomic Boundary Offset: 0 00:31:59.907 NGUID/EUI64 Never Reused: No 00:31:59.907 ANA group ID: 1 00:31:59.907 Namespace Write Protected: No 00:31:59.907 Number of LBA Formats: 1 00:31:59.907 Current LBA Format: LBA Format #00 00:31:59.907 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:59.907 00:31:59.907 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:59.907 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:59.907 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:59.907 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:59.907 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:59.907 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:59.907 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:00.164 rmmod nvme_tcp 00:32:00.164 rmmod nvme_fabrics 00:32:00.164 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:00.164 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:00.164 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:00.164 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:00.164 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:00.164 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:00.164 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:00.164 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:00.164 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:00.164 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.164 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:00.164 18:07:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.063 18:07:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:02.063 18:07:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:02.063 18:07:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:02.063 18:07:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:02.063 18:07:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:02.063 18:07:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:02.063 18:07:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:02.063 18:07:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:02.063 18:07:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:02.063 18:07:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:02.063 18:07:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:03.437 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:03.437 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:03.437 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:03.437 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:03.437 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:03.437 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:03.437 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:03.437 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:03.437 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:03.437 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:03.437 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:03.437 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:03.437 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:03.437 0000:80:04.2 (8086 0e22): ioatdma 
-> vfio-pci 00:32:03.437 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:03.437 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:04.372 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:04.372 00:32:04.372 real 0m8.958s 00:32:04.372 user 0m1.878s 00:32:04.372 sys 0m3.159s 00:32:04.372 18:07:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:04.372 18:07:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:04.372 ************************************ 00:32:04.372 END TEST nvmf_identify_kernel_target 00:32:04.372 ************************************ 00:32:04.372 18:07:39 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:04.372 18:07:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:04.372 18:07:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:04.372 18:07:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:04.372 ************************************ 00:32:04.372 START TEST nvmf_auth_host 00:32:04.372 ************************************ 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:04.372 * Looking for test storage... 00:32:04.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
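(Recap of the nvmf_identify_kernel_target run that just finished: it drives the Linux kernel nvmet target entirely through configfs. The xtrace shows the mkdir/echo/ln sequence but not the files each echo is redirected into; under the standard nvmet configfs layout the sequence corresponds roughly to the sketch below, with the attribute file names filled in from the kernel interface rather than from the trace itself:

    modprobe nvmet                                           # exposes /sys/kernel/config/nvmet
    cd /sys/kernel/config/nvmet
    sub=subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$sub"                                             # subsystem, with auto-created namespaces/ group
    mkdir "$sub/namespaces/1"
    mkdir ports/1
    echo 1            > "$sub/attr_allow_any_host"           # no host allow-list for the test
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"      # back the namespace with the local NVMe disk
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > ports/1/addr_traddr                  # listen on the namespaced target address
    echo tcp          > ports/1/addr_trtype
    echo 4420         > ports/1/addr_trsvcid
    echo ipv4         > ports/1/addr_adrfam
    ln -s "/sys/kernel/config/nvmet/$sub" ports/1/subsystems/   # expose the subsystem on the port

The trace also echoes a model string, SPDK-nqn.2016-06.io.spdk:testnqn, which on kernels that expose it would land in the subsystem's attr_model file. The nvme discover and spdk_nvme_identify output above confirm the port is live, and the teardown at the end of the test mirrors the setup: remove the ports/1/subsystems symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.)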
00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:04.372 18:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:04.373 18:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@298 -- # local -ga mlx 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:06.301 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:06.301 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 
]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:06.301 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:06.301 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:06.301 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:06.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:06.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:32:06.560 00:32:06.560 --- 10.0.0.2 ping statistics --- 00:32:06.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.560 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:06.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:06.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:32:06.560 00:32:06.560 --- 10.0.0.1 ping statistics --- 00:32:06.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.560 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1084326 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # 
waitforlisten 1084326 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 1084326 ']' 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:06.560 18:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9a84a0929e42807b19b9e2b1acca91eb 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Buy 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9a84a0929e42807b19b9e2b1acca91eb 0 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9a84a0929e42807b19b9e2b1acca91eb 0 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9a84a0929e42807b19b9e2b1acca91eb 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:06.819 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Buy 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Buy 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Buy 
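(The nvmf_auth_host test begins by generating the DH-HMAC-CHAP secrets it will use: keys[] hold the host-side secrets and ckeys[] the controller-side counterparts for bidirectional authentication. gen_dhchap_key, as traced above, pulls random bytes with xxd, wraps them as an NVMe "DHHC-1" secret via a small python helper that the xtrace does not expand, and stores the result in a 0600 temp file; here keys[0] is a 32-hex-digit key with digest index 0 (null) and ckeys[0] a 64-digit key with digest index 3 (sha512). A condensed sketch of the same flow, with the length, digest index, and file naming taken from the trace and the DHHC-1 wrapping only described, not reproduced:

    len=32                                             # hex digits requested for the null-digest key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # len/2 random bytes rendered as a hex string
    file=$(mktemp -t spdk.key-null.XXX)                # e.g. /tmp/spdk.key-null.Buy in this run
    # format_dhchap_key encodes the raw hex as a secret of the form "DHHC-1:<digest id>:<base64 payload>:"
    # (digest id 0 for the plain key, 3 for the sha512 companion, matching the trace); the python
    # one-liner that performs the encoding is collapsed in the xtrace, so it is only summarized here.
    chmod 0600 "$file"

The same pattern repeats below for the remaining keys[i]/ckeys[i] pairs the auth test matrix consumes.)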
00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e9a9e54204a4b0331f476ae2a95a58b38835fc63fc7f9fcbacc1eae88a227f6f 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7PV 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e9a9e54204a4b0331f476ae2a95a58b38835fc63fc7f9fcbacc1eae88a227f6f 3 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e9a9e54204a4b0331f476ae2a95a58b38835fc63fc7f9fcbacc1eae88a227f6f 3 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e9a9e54204a4b0331f476ae2a95a58b38835fc63fc7f9fcbacc1eae88a227f6f 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7PV 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7PV 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.7PV 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fe5ff177fd1abc4e95fbcfdea9a83f55b8edb16fbe719eaa 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.aki 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fe5ff177fd1abc4e95fbcfdea9a83f55b8edb16fbe719eaa 0 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fe5ff177fd1abc4e95fbcfdea9a83f55b8edb16fbe719eaa 0 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix 
key digest 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fe5ff177fd1abc4e95fbcfdea9a83f55b8edb16fbe719eaa 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.aki 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.aki 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.aki 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cc48fe2cf792a610b3bdead14dfeebd82537064f22d8560b 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.IKj 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cc48fe2cf792a610b3bdead14dfeebd82537064f22d8560b 2 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cc48fe2cf792a610b3bdead14dfeebd82537064f22d8560b 2 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cc48fe2cf792a610b3bdead14dfeebd82537064f22d8560b 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.IKj 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.IKj 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.IKj 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=df9bb49c148b16f2e3bbb3ff622661bf 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:07.078 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.O6w 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key df9bb49c148b16f2e3bbb3ff622661bf 1 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 df9bb49c148b16f2e3bbb3ff622661bf 1 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=df9bb49c148b16f2e3bbb3ff622661bf 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.O6w 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.O6w 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.O6w 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=21edc048284aa25219f336788f5e5fbc 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.yYE 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 21edc048284aa25219f336788f5e5fbc 1 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 21edc048284aa25219f336788f5e5fbc 1 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=21edc048284aa25219f336788f5e5fbc 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.yYE 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.yYE 00:32:07.079 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.yYE 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:07.337 18:07:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=90cd06b59ae02fb704f2011ec5103d1a759d48f852be3a17 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.SAk 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 90cd06b59ae02fb704f2011ec5103d1a759d48f852be3a17 2 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 90cd06b59ae02fb704f2011ec5103d1a759d48f852be3a17 2 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=90cd06b59ae02fb704f2011ec5103d1a759d48f852be3a17 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.SAk 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.SAk 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.SAk 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7b3c0c8a827a594ebb20cae1c02decb6 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.U49 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7b3c0c8a827a594ebb20cae1c02decb6 0 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7b3c0c8a827a594ebb20cae1c02decb6 0 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7b3c0c8a827a594ebb20cae1c02decb6 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:07.337 18:07:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.U49 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.U49 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.U49 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=923f5ee8143ff9d2731f8f8b38861818be3bcdd804a348d18d55ee4ef2409824 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.tDI 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 923f5ee8143ff9d2731f8f8b38861818be3bcdd804a348d18d55ee4ef2409824 3 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 923f5ee8143ff9d2731f8f8b38861818be3bcdd804a348d18d55ee4ef2409824 3 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=923f5ee8143ff9d2731f8f8b38861818be3bcdd804a348d18d55ee4ef2409824 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:07.337 18:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:07.337 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.tDI 00:32:07.337 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.tDI 00:32:07.337 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.tDI 00:32:07.337 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:07.337 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1084326 00:32:07.337 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 1084326 ']' 00:32:07.337 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.337 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:07.337 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
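All ten secrets (keys[0..4] plus controller keys ckeys[0..3]; ckeys[4] is deliberately left empty so key4 is exercised without a bidirectional secret) are wrapped by the inline "python -" step into the DHHC-1 representation. Judging from the strings echoed later in this log, the payload is the ASCII hex string with a CRC-32 appended and then base64-encoded, and the two-digit field after "DHHC-1:" selects the hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512). The sketch below works under those assumptions; in particular the CRC byte order is assumed little-endian (as in nvme-cli's gen-dhchap-key) and should be checked against nvmf/common.sh.

  # Sketch of the DHHC-1 formatting step; assumptions are noted above and in the comments.
  format_dhchap_key_sketch() {
      local hexkey=$1 digest_id=$2          # digest_id: 0=none, 1=sha256, 2=sha384, 3=sha512
      python3 - "$hexkey" "$digest_id" <<'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                        # the ASCII hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, 'little')       # assumed byte order
print('DHHC-1:%02d:%s:' % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
PYEOF
  }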
00:32:07.338 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:07.338 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.595 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:07.595 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:07.595 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:07.595 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Buy 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.7PV ]] 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7PV 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.aki 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.IKj ]] 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IKj 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.O6w 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.yYE ]] 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.yYE 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
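With the SPDK application up (pid 1084326 in this run), the loop over ${!keys[@]} registers every key file with the application's keyring over the JSON-RPC socket; rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. Spelled out for the first key pairs of this run, the calls traced above and continued below amount to:

  # The same registrations the rpc_cmd lines perform, spelled out with rpc.py.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # what rpc_cmd wraps
  $RPC keyring_file_add_key key0  /tmp/spdk.key-null.Buy
  $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7PV
  $RPC keyring_file_add_key key1  /tmp/spdk.key-null.aki
  $RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IKj
  # ...continuing through key2/ckey2, key3/ckey3 and key4 (key4 has no controller key).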
00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.SAk 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.596 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.U49 ]] 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.U49 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.tDI 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
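nvmet_auth_init resolves the address the kernel-side target should listen on (10.0.0.1 here), and configure_kernel_target then builds that target out of the kernel nvmet driver through configfs: the modprobe, mkdir, echo and ln -s lines traced below. xtrace does not record where each echo is redirected, so the attribute file names in this condensed sketch are the standard nvmet ones and are an assumption.

  # Condensed sketch of the configfs sequence traced below; redirect targets are assumed.
  SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  PORT=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  mkdir -p "$SUBSYS/namespaces/1" "$PORT"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$SUBSYS/attr_model"        # assumed attribute
  echo 1            > "$SUBSYS/attr_allow_any_host"                  # assumed attribute
  echo /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"             # the unpartitioned, non-zoned disk found below
  echo 1            > "$SUBSYS/namespaces/1/enable"
  echo 10.0.0.1     > "$PORT/addr_traddr"
  echo tcp          > "$PORT/addr_trtype"
  echo 4420         > "$PORT/addr_trsvcid"
  echo ipv4         > "$PORT/addr_adrfam"
  ln -s "$SUBSYS" "$PORT/subsystems/"                                # expose the subsystem on the port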
00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:07.853 18:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:08.784 Waiting for block devices as requested 00:32:08.784 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:08.784 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:08.784 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:09.041 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:09.041 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:09.041 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:09.041 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:09.309 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:09.309 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:09.309 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:09.309 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:09.566 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:09.566 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:09.566 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:09.566 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:09.823 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:09.823 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:10.080 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:10.080 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:10.080 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:10.080 18:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:10.080 18:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:10.080 18:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:10.080 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:10.080 18:07:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:10.080 18:07:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:10.338 No valid GPT data, bailing 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:10.338 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:10.338 00:32:10.338 Discovery Log Number of Records 2, Generation counter 2 00:32:10.338 =====Discovery Log Entry 0====== 00:32:10.338 trtype: tcp 00:32:10.338 adrfam: ipv4 00:32:10.338 subtype: current discovery subsystem 00:32:10.338 treq: not specified, sq flow control disable supported 00:32:10.338 portid: 1 00:32:10.338 trsvcid: 4420 00:32:10.339 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:10.339 traddr: 10.0.0.1 00:32:10.339 eflags: none 00:32:10.339 sectype: none 00:32:10.339 =====Discovery Log Entry 1====== 00:32:10.339 trtype: tcp 00:32:10.339 adrfam: ipv4 00:32:10.339 subtype: nvme subsystem 00:32:10.339 treq: not specified, sq flow control disable supported 00:32:10.339 portid: 1 00:32:10.339 trsvcid: 4420 00:32:10.339 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:10.339 traddr: 10.0.0.1 00:32:10.339 eflags: none 00:32:10.339 sectype: none 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 
]] 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.339 18:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.339 nvme0n1 00:32:10.339 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.339 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.339 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.339 
18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.339 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.339 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: ]] 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.596 
18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.596 nvme0n1 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:10.596 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.597 18:07:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: ]] 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.597 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.854 nvme0n1 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
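Once discovery confirms the kernel target is reachable, each round works the same way: nvmet_auth_set_key points the kernel's entry for nqn.2024-02.io.spdk:host0 at one digest/DH-group/secret combination (the 'hmac(sha256)', ffdhe2048 and DHHC-1 echoes above; their redirect targets, not shown by xtrace, are presumably the host entry's dhchap attributes under /sys/kernel/config/nvmet/hosts/), and connect_authenticate then attaches from the SPDK side, which acts as the NVMe-oF host in this test, with matching DH-HMAC-CHAP options. One iteration, condensed from the rpc_cmd lines in the trace:

  # One connect_authenticate round on the SPDK (host) side, condensed from the trace.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $RPC bdev_nvme_get_controllers | jq -r '.[].name'    # expect "nvme0" once authentication succeeds
  $RPC bdev_nvme_detach_controller nvme0               # tear down before the next digest/dhgroup/key combination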
00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: ]] 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.854 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:10.855 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.855 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.855 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.855 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.855 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.855 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.855 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.855 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.855 18:07:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.855 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.855 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.855 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.855 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.855 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.855 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:10.855 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.855 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.113 nvme0n1 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: ]] 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:11.113 18:07:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.113 nvme0n1 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.113 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.371 18:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.371 nvme0n1 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: ]] 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.371 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.635 nvme0n1 00:32:11.635 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.635 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.635 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.635 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.635 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.635 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.635 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.635 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.635 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.635 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.635 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.635 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.635 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:11.635 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.635 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: ]] 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.636 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.894 nvme0n1 00:32:11.894 
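Each round traced above follows the same two-step host-side pattern: first narrow the allowed DH-HMAC-CHAP digest and FFDHE group, then attach the controller with the matching secrets. A minimal sketch of one round, assuming SPDK's scripts/rpc.py wrapper is used (the test drives the identical RPCs through its rpc_cmd helper, and key3/ckey3 name keys that were registered earlier in the test, outside this excerpt):

  # Allow only one digest and one DH group for this round (host/auth.sh@60)
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # Attach with the host key and, when one exists, the controller (bidirectional) key (host/auth.sh@61)
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3

The nvme0n1 line that appears after each successful attach is the bdev name the attach RPC reports for namespace 1 of the newly attached controller.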
18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: ]] 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.894 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.152 nvme0n1 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: ]] 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.152 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.411 nvme0n1 00:32:12.411 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.411 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.411 18:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.411 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.411 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.411 18:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.411 
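The get_main_ns_ip calls interleaved with each attach resolve which address the initiator should dial for the current transport: an associative array maps the transport name to the environment variable holding the address, and that variable is then dereferenced (10.0.0.1 here for tcp). A rough reconstruction from the trace, with the transport variable name and the error handling assumed rather than shown:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # the trace's [[ -z tcp ]] / [[ -z NVMF_INITIATOR_IP ]] checks suggest a transport variable like this
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      ip=${!ip}                 # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
      [[ -z $ip ]] && return 1
      echo "$ip"
  }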
18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.411 18:07:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.411 nvme0n1 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.411 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: ]] 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:12.669 18:07:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.669 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.927 nvme0n1 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: ]] 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.927 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.928 18:07:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:12.928 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.928 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.185 nvme0n1 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: ]] 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.185 18:07:47 
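The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment seen before every attach is a bash :+ expansion: the array ends up holding the extra --dhchap-ctrlr-key argument only when a controller key is defined for that key ID, which is why the key ID 4 rounds (ckey is empty at host/auth.sh@46) attach with --dhchap-key key4 alone, i.e. without bidirectional authentication. A small standalone illustration with made-up values:

  ckeys=([0]="DHHC-1:03:example-secret" [4]="")   # hypothetical contents
  keyid=4
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"    # 0 -> no extra arguments are passed
  keyid=0
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${ckey[@]}"     # --dhchap-ctrlr-key ckey0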
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:13.185 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.186 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.186 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.186 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.186 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.186 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.186 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.186 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.186 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.186 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.186 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.186 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.186 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.186 18:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.186 18:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:13.186 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.186 18:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.443 nvme0n1 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: ]] 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.443 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.444 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.444 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.444 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.444 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.444 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.444 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:13.444 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.444 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.701 nvme0n1 00:32:13.701 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.701 18:07:48 
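After each authenticated attach, the trace runs the same verification and teardown: list the controllers, check the name, and detach before moving on to the next digest/dhgroup/key combination. A sketch of that step, again assuming the scripts/rpc.py wrapper in place of the test's rpc_cmd helper:

  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]                       # xtrace prints this as [[ nvme0 == \n\v\m\e\0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The \n\v\m\e\0 form that keeps appearing at host/auth.sh@64 is just bash xtrace escaping the right-hand side of == to show it is matched literally rather than as a glob pattern; it is the plain string nvme0, not escape sequences.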
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.701 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.701 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.701 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.701 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.701 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.701 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.701 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.701 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.958 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.214 nvme0n1 00:32:14.214 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.214 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.214 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.214 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.214 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.214 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.214 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.214 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.214 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:14.215 18:07:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: ]] 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.215 18:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.780 nvme0n1 00:32:14.780 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.780 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.780 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.780 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.780 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.780 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.780 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.780 
18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.780 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.780 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.780 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: ]] 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.781 18:07:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.781 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.038 nvme0n1 00:32:15.038 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.038 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.038 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.038 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.038 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: ]] 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- 
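get_main_ns_ip (nvmf/common.sh@741-755 above) picks the address the initiator dials based on the transport under test; with TCP it resolves NVMF_INITIATOR_IP, which is 10.0.0.1 in this run. A sketch of that lookup matching the candidate checks in the trace (the exact failure handling is assumed):

    # Sketch of get_main_ns_ip: map the transport under test to the env var holding the target address
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # nvmf/common.sh@747: both the transport and its candidate variable name must be known
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # nvmf/common.sh@750-755: dereference the variable and hand back the address (10.0.0.1 here)
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }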
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.296 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.297 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:15.297 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.297 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.297 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.297 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.297 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.297 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.297 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.297 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:15.297 18:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.297 18:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:15.297 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.297 18:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.860 nvme0n1 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.860 
18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: ]] 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.860 18:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.425 nvme0n1 00:32:16.425 18:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.425 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.425 18:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.425 18:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.425 18:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.425 18:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- 
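Each pass is preceded by nvmet_auth_set_key (host/auth.sh@42-51 above), which programs the target-side secret for the keyid before the initiator connects. The echoed values come straight from the trace; where they are written is not shown there, so the kernel-nvmet configfs locations below are an assumption, sketched for a Linux nvmet target:

    # Sketch of the target-side key programming; configfs paths are assumed, values are from the trace
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
        echo "hmac(${digest})" > "$host_dir/dhchap_hash"       # host/auth.sh@48 in the trace
        echo "$dhgroup"        > "$host_dir/dhchap_dhgroup"    # host/auth.sh@49
        echo "$key"            > "$host_dir/dhchap_key"        # host/auth.sh@50
        # host/auth.sh@51: a controller (bidirectional) key is only installed when the keyid defines one
        [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
    }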
common/autotest_common.sh@10 -- # set +x 00:32:16.425 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.426 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.426 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:16.426 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.426 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.426 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.426 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.426 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.426 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.426 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:16.426 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.426 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.426 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:16.426 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.426 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.990 nvme0n1 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: ]] 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.990 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.991 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.991 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.991 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.991 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.991 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:16.991 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.991 18:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.991 18:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:16.991 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.991 18:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.924 nvme0n1 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.924 18:07:52 
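The keyid 0 pair above also illustrates the DHHC-1 secret representation these tests use: the field after "DHHC-1:" indicates how the base64 blob was produced (00 = plain secret, 01/02/03 = secret transformed using SHA-256/384/512), and the blob carries the secret plus a checksum, per the NVMe DH-HMAC-CHAP secret format. A small snippet that just reads that indicator off one of the logged keys (the case labels reflect that interpretation, not anything the log itself states):

    # Inspect the transform indicator of a DHHC-1 secret taken from the trace above
    key='DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE:'
    hmac_id=${key#DHHC-1:}; hmac_id=${hmac_id%%:*}
    case "$hmac_id" in
        00) echo "secret stored as-is" ;;
        01) echo "secret transformed using SHA-256" ;;
        02) echo "secret transformed using SHA-384" ;;
        03) echo "secret transformed using SHA-512" ;;
    esac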
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: ]] 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.924 18:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.857 nvme0n1 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
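The recurring xtrace_disable and "[[ 0 == 0 ]]" lines come from the autotest rpc_cmd wrapper (common/autotest_common.sh@559 and @587 in the trace): tracing is muted around each RPC and its return code is asserted to be zero. A rough, assumed shape of that wrapper; the real helper in autotest_common.sh may dispatch differently, and only the two trace points are taken from the log:

    # Assumed shape of rpc_cmd; only the xtrace_disable (@559) and rc check (@587) are visible in the trace
    rpc_cmd() {
        local rc=0
        xtrace_disable                              # silence tracing around the RPC call
        "$rootdir/scripts/rpc.py" "$@" || rc=$?     # assumed dispatch to SPDK's rpc.py
        xtrace_restore
        [[ $rc == 0 ]]                              # every RPC in this test is expected to succeed
    }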
DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: ]] 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.857 18:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.789 nvme0n1 00:32:19.789 18:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.789 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.789 18:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.789 18:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.789 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.789 18:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.789 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.789 
18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.789 18:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.789 18:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: ]] 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.046 18:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.977 nvme0n1 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:20.977 
18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.977 18:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.922 nvme0n1 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: ]] 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.922 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
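From this point the sweep moves on to sha384 with ffdhe2048, which is the outer structure visible at host/auth.sh@100-104 above: every digest is paired with every DH group and every keyid, and each combination gets a fresh target key followed by a fresh connect. A sketch of that driving loop, with the array contents limited to what this excerpt shows (the real lists are presumably longer):

    # Sketch of the sweep that produces the repeated blocks in this log (host/auth.sh@100-104)
    digests=(sha256 sha384)                     # only these two digests appear in this excerpt
    dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)    # groups seen so far; the full list is assumed longer
    for digest in "${digests[@]}"; do           # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do     # host/auth.sh@101
            for keyid in "${!keys[@]}"; do      # host/auth.sh@102: keyids 0 through 4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # host/auth.sh@103: program the target
                connect_authenticate "$digest" "$dhgroup" "$keyid"    # host/auth.sh@104: attach, verify, detach
            done
        done
    done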
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.923 nvme0n1 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: ]] 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.923 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.179 nvme0n1 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: ]] 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.179 18:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.436 nvme0n1 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:22.436 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: ]] 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.437 nvme0n1 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.437 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.694 nvme0n1 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: ]] 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
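Editor's note on the trace above: each iteration first provisions the target side via nvmet_auth_set_key (host/auth.sh@42-51), which echoes the negotiated digest as 'hmac(sha384)', the FFDHE group, and the DHHC-1 host secret, plus a controller secret when the key pair is bidirectional. The sketch below is a loose, hypothetical reconstruction of that helper; the kernel nvmet configfs paths and attribute names are assumptions, since the trace shows only the echoed values, not where they are written.

# Hypothetical reconstruction of the target-side key setup seen in the trace.
# The configfs paths/attribute names are assumptions; only the echoed values
# ('hmac(sha384)', the DH group, the DHHC-1 secrets) appear in the log.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

    echo "hmac(${digest})" > "${host}/dhchap_hash"      # e.g. hmac(sha384)
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"   # e.g. ffdhe3072
    echo "${key}"          > "${host}/dhchap_key"       # host secret (DHHC-1:xx:...)
    # keyid 4 carries no controller secret, so this write is conditional.
    [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"
}
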
00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:22.694 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.695 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.952 nvme0n1 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: ]] 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
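The initiator side of each pass is connect_authenticate (host/auth.sh@55-65). Stitching together only the RPCs that appear in the trace, one pass looks roughly like the sketch below; rpc_cmd is assumed to be the test suite's wrapper around scripts/rpc.py for the running SPDK target, and the conditional controller-key argument mirrors the ckey expansion at host/auth.sh@58.

# Rough reconstruction of one connect_authenticate pass, using only RPCs
# visible in the trace. Assumes rpc_cmd wraps scripts/rpc.py for the target.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ctrlr_key=()

    # Pin the initiator to the digest/DH group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Only bidirectional key pairs pass a controller key (keyid 4 does not).
    [[ -n ${ckeys[keyid]:-} ]] && ctrlr_key=(--dhchap-ctrlr-key "ckey${keyid}")

    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ctrlr_key[@]}"

    # DH-HMAC-CHAP succeeded only if the controller actually shows up by name.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Detach so the next digest/dhgroup/keyid combination starts clean.
    rpc_cmd bdev_nvme_detach_controller nvme0
}

The outer loops at host/auth.sh@101-102 then repeat this for every DH group (ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...) and every key index 0-4, which is why the same sequence recurs throughout the log below.
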
00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.952 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.953 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.953 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.953 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.953 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:22.953 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.953 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.210 nvme0n1 00:32:23.210 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.210 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.210 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.210 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.210 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: ]] 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.211 18:07:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.468 nvme0n1 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:23.468 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: ]] 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.469 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.727 nvme0n1 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.727 nvme0n1 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.727 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.727 18:07:58 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: ]] 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.985 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.986 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:23.986 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.986 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.243 nvme0n1 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: ]] 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.243 18:07:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.500 nvme0n1 00:32:24.500 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.500 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.500 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.500 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.500 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.500 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.500 18:07:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.500 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.500 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: ]] 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.501 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.758 nvme0n1 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:24.758 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: ]] 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:25.016 18:07:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.016 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.274 nvme0n1 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:25.274 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:25.275 18:07:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.532 nvme0n1 00:32:25.532 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.532 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.532 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.532 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.532 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.532 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.532 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.532 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.532 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.532 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.532 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.532 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:25.532 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.532 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: ]] 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.533 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.098 nvme0n1 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: ]] 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.099 18:08:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.664 nvme0n1 00:32:26.664 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.665 18:08:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: ]] 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.665 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.231 nvme0n1 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: ]] 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.231 18:08:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.803 nvme0n1 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
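The entries above and below are the nvmf_auth_host test sweeping digest / DH-group / key-id combinations. Reading the host/auth.sh@100-@104 markers in the trace, the driving loop is roughly the sketch below; the array contents are inferred from the combinations that appear in this excerpt (sha384 and sha512 with ffdhe2048/4096/6144/8192, key ids 0-4), so treat them as assumptions rather than a copy of the script's real arrays.

  # Approximate reconstruction of the loop traced at host/auth.sh@100-104.
  # Array contents are inferred from this log excerpt (assumption); the real
  # script may cover additional digests/dhgroups.
  digests=(sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)
  # keys[0..4] / ckeys[0..4] hold the DHHC-1 secrets prepared earlier in the test.

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # target-side key/hash/dhgroup setup
              connect_authenticate "$digest" "$dhgroup" "$keyid"    # host-side connect + verify + detach
          done
      done
  done
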
00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.803 18:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.369 nvme0n1 00:32:28.369 18:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.369 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.369 18:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.369 18:08:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.369 18:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.369 18:08:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.369 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.369 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.369 18:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.369 18:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.369 18:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.369 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:28.369 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.369 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: ]] 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
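For reference, one complete connect_authenticate iteration from this trace (sha384 / ffdhe8192, key id 0), reduced to the SPDK RPCs it issues. The RPC names, flags, addresses and NQNs are exactly those visible in the entries; the rpc.py path, and the fact that key0/ckey0 name keyring entries registered earlier in the test, are assumptions.

  RPC=scripts/rpc.py    # assumed path to SPDK's RPC client; the test wraps it in rpc_cmd

  # Restrict the host to the digest / DH-group pair under test.
  $RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

  # Attach over TCP with DH-HMAC-CHAP; key id 0 also carries a controller
  # (bidirectional) key, so --dhchap-ctrlr-key is passed as well.
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Confirm the controller came up, then detach before the next combination.
  $RPC bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
  $RPC bdev_nvme_detach_controller nvme0
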
00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.370 18:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.301 nvme0n1 00:32:29.301 18:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.301 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.301 18:08:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.301 18:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.301 18:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.301 18:08:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: ]] 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.301 18:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.230 nvme0n1 00:32:30.230 18:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.230 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.230 18:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.230 18:08:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.230 18:08:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.230 18:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.487 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.487 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: ]] 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.488 18:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.451 nvme0n1 00:32:31.451 18:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.451 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.451 18:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.451 18:08:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.451 18:08:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: ]] 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.451 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.386 nvme0n1 00:32:32.386 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.386 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:32.386 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.386 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.386 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.386 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.386 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.387 18:08:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.387 18:08:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.320 nvme0n1 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: ]] 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.320 18:08:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.320 nvme0n1 00:32:33.320 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.320 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.320 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.320 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.320 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.320 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.578 18:08:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: ]] 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.578 nvme0n1 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: ]] 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:33.578 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.579 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.836 nvme0n1 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.836 18:08:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: ]] 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.836 18:08:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.836 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.094 nvme0n1 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.094 nvme0n1 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: ]] 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.094 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.352 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.352 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.352 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.352 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.352 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.353 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.353 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.353 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.353 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.353 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.353 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.353 18:08:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.353 18:08:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:34.353 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.353 18:08:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.353 nvme0n1 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.353 
18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: ]] 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.353 18:08:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.353 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.610 nvme0n1 00:32:34.610 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.610 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.610 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.610 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.610 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.610 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.610 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.610 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
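Each pass in this section repeats the same host-side sequence with a different digest/dhgroup/keyid combination. Condensed from the rpc_cmd invocations visible in the surrounding trace (the controller name, NQNs, 10.0.0.1 address and flags are taken verbatim from the log; the single-line verification is a condensation of the trace's separate jq and comparison steps, not the test's literal source), one pass looks roughly like:

  # host side: restrict DH-HMAC-CHAP negotiation to the combination under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # attach with the key pair for this keyid and expect authentication to succeed
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # the controller must show up, then is detached before the next iteration
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0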
00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: ]] 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.611 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.868 nvme0n1 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.868 18:08:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: ]] 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
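The host/auth.sh@58 entries above show how the optional controller (bidirectional) key is handled: keyid 4 has no ckey, so the extra flag pair must disappear from the attach command. The array expansion is copied verbatim from the trace; the way it is consumed on the attach line is an inference from the expanded commands in the log, so treat it as a sketch rather than the script's exact wording:

  # empty ckeys[keyid] -> ckey=() ; non-empty -> ckey=(--dhchap-ctrlr-key "ckey${keyid}")
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"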
00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.868 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.869 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.869 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.869 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:34.869 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.869 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.127 nvme0n1 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:35.127 
18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.127 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.385 nvme0n1 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: ]] 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:35.385 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:35.386 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:35.386 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.386 18:08:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:35.386 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.386 18:08:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.386 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.386 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.386 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.386 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.386 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.386 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.386 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.386 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.386 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.386 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.386 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.386 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.386 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:35.386 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.386 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.644 nvme0n1 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: ]] 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:35.644 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.645 18:08:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:35.645 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.645 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.645 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.645 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.645 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.645 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.645 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.645 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.645 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.645 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.645 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.645 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.645 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.645 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.645 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:35.645 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.645 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.903 nvme0n1 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
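The iteration driving all of these passes is visible in the host/auth.sh@101-104 trace lines. Reassembled from those lines only (the array contents are limited to what this excerpt exposes: sha512 as the digest, ffdhe2048/ffdhe3072/ffdhe4096 as dhgroups, key ids 0 through 4; any digest loop outside this excerpt is not shown), the structure is approximately:

  for dhgroup in "${dhgroups[@]}"; do              # ffdhe2048, ffdhe3072, ffdhe4096 in this section
      for keyid in "${!keys[@]}"; do               # 0..4
          nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"   # target side: install key/ckey for this id
          connect_authenticate sha512 "$dhgroup" "$keyid"   # host side: set options, attach, verify, detach
      done
  done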
00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: ]] 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:35.903 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.904 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.162 nvme0n1 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: ]] 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.162 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:36.163 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.163 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.422 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.422 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.422 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:36.422 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.422 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.422 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.422 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.422 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.422 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.422 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.422 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.422 18:08:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.422 18:08:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:36.422 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.422 18:08:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.422 nvme0n1 00:32:36.422 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.422 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.422 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.422 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.422 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.422 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.680 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.939 nvme0n1 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: ]] 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
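Note on the trace above and below: the test repeats one cycle per digest / DH group / key index. nvmet_auth_set_key pushes the DH-HMAC-CHAP secret (and, when one exists, the controller secret) to the target side via the echo lines at auth.sh@48-51 (the redirect targets are not visible in the trace), bdev_nvme_set_options limits the SPDK initiator to the digest and dhgroup under test, and bdev_nvme_attach_controller must then complete authentication before nvme0 shows up in bdev_nvme_get_controllers and is detached again for the next key. Condensed into a sketch (not verbatim auth.sh; rpc_cmd is the suite's JSON-RPC wrapper, and the key0/ckey0 names are assumed to have been registered earlier in the script), one iteration looks roughly like:

# Restrict the initiator to the digest/dhgroup under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
# Connect using the secret (and optional controller secret) for this key index;
# this only succeeds if DH-HMAC-CHAP completes against the target.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Verify the controller came up, then tear it down for the next iteration.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0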
00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.939 18:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.504 nvme0n1 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: ]] 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
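The get_main_ns_ip helper from nvmf/common.sh appears before every attach: it maps the transport to the environment variable holding the address to connect to ("rdma" uses NVMF_FIRST_TARGET_IP, "tcp" uses NVMF_INITIATOR_IP) and prints its value, which resolves to 10.0.0.1 for every TCP attach in this run. Its rough shape, reconstructed from the trace (the exact body and the TEST_TRANSPORT variable name are assumptions):

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # Pick the variable name for the transport in use, then dereference it.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}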
00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.504 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.069 nvme0n1 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: ]] 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.069 18:08:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.634 nvme0n1 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: ]] 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.634 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.198 nvme0n1 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:39.198 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.199 18:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.764 nvme0n1 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.764 18:08:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE4NGEwOTI5ZTQyODA3YjE5YjllMmIxYWNjYTkxZWKonhkE: 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: ]] 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTlhOWU1NDIwNGE0YjAzMzFmNDc2YWUyYTk1YTU4YjM4ODM1ZmM2M2ZjN2Y5ZmNiYWNjMWVhZTg4YTIyN2Y2ZvAloaA=: 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.764 18:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.698 nvme0n1 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: ]] 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.698 18:08:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.630 nvme0n1 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.630 18:08:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGY5YmI0OWMxNDhiMTZmMmUzYmJiM2ZmNjIyNjYxYmYyQK+k: 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: ]] 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjFlZGMwNDgyODRhYTI1MjE5ZjMzNjc4OGY1ZTVmYmP10Ffd: 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.630 18:08:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.563 nvme0n1 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjZDA2YjU5YWUwMmZiNzA0ZjIwMTFlYzUxMDNkMWE3NTlkNDhmODUyYmUzYTE33N4iNQ==: 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: ]] 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2IzYzBjOGE4MjdhNTk0ZWJiMjBjYWUxYzAyZGVjYjaB1K4J: 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:42.563 18:08:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.563 18:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.497 nvme0n1 00:32:43.497 18:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.497 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.497 18:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.497 18:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.497 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.497 18:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.497 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.497 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.497 18:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.497 18:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTIzZjVlZTgxNDNmZjlkMjczMWY4ZjhiMzg4NjE4MThiZTNiY2RkODA0YTM0OGQxOGQ1NWVlNGVmMjQwOTgyNPzXCSA=: 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.754 18:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.755 18:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.755 18:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.755 18:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.755 18:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.755 18:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.755 18:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.755 18:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.755 18:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.755 18:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.755 18:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:43.755 18:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:43.755 18:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.701 nvme0n1 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmU1ZmYxNzdmZDFhYmM0ZTk1ZmJjZmRlYTlhODNmNTViOGVkYjE2ZmJlNzE5ZWFhivJAyw==: 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: ]] 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2M0OGZlMmNmNzkyYTYxMGIzYmRlYWQxNGRmZWViZDgyNTM3MDY0ZjIyZDg1NjBiYhvolQ==: 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.701 
18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.701 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.701 request: 00:32:44.701 { 00:32:44.701 "name": "nvme0", 00:32:44.701 "trtype": "tcp", 00:32:44.701 "traddr": "10.0.0.1", 00:32:44.702 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:44.702 "adrfam": "ipv4", 00:32:44.702 "trsvcid": "4420", 00:32:44.702 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:44.702 "method": "bdev_nvme_attach_controller", 00:32:44.702 "req_id": 1 00:32:44.702 } 00:32:44.702 Got JSON-RPC error response 00:32:44.702 response: 00:32:44.702 { 00:32:44.702 "code": -5, 00:32:44.702 "message": "Input/output error" 00:32:44.702 } 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:44.702 
18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.702 request: 00:32:44.702 { 00:32:44.702 "name": "nvme0", 00:32:44.702 "trtype": "tcp", 00:32:44.702 "traddr": "10.0.0.1", 00:32:44.702 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:44.702 "adrfam": "ipv4", 00:32:44.702 "trsvcid": "4420", 00:32:44.702 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:44.702 "dhchap_key": "key2", 00:32:44.702 "method": "bdev_nvme_attach_controller", 00:32:44.702 "req_id": 1 00:32:44.702 } 00:32:44.702 Got JSON-RPC error response 00:32:44.702 response: 00:32:44.702 { 00:32:44.702 "code": -5, 00:32:44.702 "message": "Input/output error" 00:32:44.702 } 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:44.702 
18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.702 request: 00:32:44.702 { 00:32:44.702 "name": "nvme0", 00:32:44.702 "trtype": "tcp", 00:32:44.702 "traddr": "10.0.0.1", 00:32:44.702 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:44.702 "adrfam": "ipv4", 00:32:44.702 "trsvcid": "4420", 00:32:44.702 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:44.702 "dhchap_key": "key1", 00:32:44.702 "dhchap_ctrlr_key": "ckey2", 00:32:44.702 "method": "bdev_nvme_attach_controller", 00:32:44.702 "req_id": 1 
00:32:44.702 } 00:32:44.702 Got JSON-RPC error response 00:32:44.702 response: 00:32:44.702 { 00:32:44.702 "code": -5, 00:32:44.702 "message": "Input/output error" 00:32:44.702 } 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:44.702 rmmod nvme_tcp 00:32:44.702 rmmod nvme_fabrics 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1084326 ']' 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1084326 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 1084326 ']' 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 1084326 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1084326 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1084326' 00:32:44.702 killing process with pid 1084326 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 1084326 00:32:44.702 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 1084326 00:32:44.961 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:44.961 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:44.961 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:44.961 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:44.961 18:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:44.961 18:08:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.961 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:44.961 18:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.519 18:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:47.519 18:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:47.519 18:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:47.519 18:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:47.519 18:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:47.519 18:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:47.519 18:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:47.519 18:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:47.519 18:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:47.519 18:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:47.519 18:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:47.519 18:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:47.519 18:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:48.453 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:48.453 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:48.453 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:48.453 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:48.453 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:48.453 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:48.453 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:48.453 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:48.453 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:48.453 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:48.453 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:48.453 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:48.453 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:48.453 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:48.453 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:48.453 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:49.384 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:49.384 18:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Buy /tmp/spdk.key-null.aki /tmp/spdk.key-sha256.O6w /tmp/spdk.key-sha384.SAk /tmp/spdk.key-sha512.tDI /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:49.384 18:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:50.774 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:50.774 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:50.774 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:32:50.774 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:50.774 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:50.774 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:50.774 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:50.774 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:50.774 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:50.774 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:50.774 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:50.774 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:50.774 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:50.774 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:50.774 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:50.774 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:50.774 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:50.774 00:32:50.774 real 0m46.263s 00:32:50.774 user 0m44.255s 00:32:50.774 sys 0m5.455s 00:32:50.774 18:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:50.774 18:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.774 ************************************ 00:32:50.774 END TEST nvmf_auth_host 00:32:50.774 ************************************ 00:32:50.774 18:08:25 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:32:50.774 18:08:25 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:50.774 18:08:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:50.774 18:08:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:50.774 18:08:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:50.774 ************************************ 00:32:50.774 START TEST nvmf_digest 00:32:50.774 ************************************ 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:50.774 * Looking for test storage... 
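The nvmf_auth_host run that just finished drives its negative-path checks entirely over JSON-RPC: the initiator is restricted to one digest and DH group, and every attach attempt with missing or mismatched DH-HMAC-CHAP material is expected to come back as the "code": -5, "Input/output error" response seen above. A condensed bash sketch of that pattern, assuming the initiator-side SPDK app answers on the default /var/tmp/spdk.sock and that keys named key1/key2 were registered by the harness beforehand (their loading is not shown in this excerpt):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Restrict the initiator to sha256 + ffdhe2048, as host/auth.sh does before the checks.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# Attaching without any DH-HMAC-CHAP key must be rejected by the authenticating target...
if $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
  echo "attach without a key unexpectedly succeeded" >&2; exit 1
fi
# ...and so must attaching with the wrong key (key2 instead of key1); the same applies to
# the key1/ckey2 mismatch exercised above. All of these surface as the -5 JSON-RPC error.
if $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
  echo "attach with the wrong key unexpectedly succeeded" >&2; exit 1
fi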
00:32:50.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:50.774 18:08:25 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:32:50.774 18:08:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:52.671 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:52.671 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:52.671 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:52.671 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:52.671 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:52.672 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:52.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:52.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:32:52.930 00:32:52.930 --- 10.0.0.2 ping statistics --- 00:32:52.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.930 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:52.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:52.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:32:52.930 00:32:52.930 --- 10.0.0.1 ping statistics --- 00:32:52.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.930 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:52.930 ************************************ 00:32:52.930 START TEST nvmf_digest_clean 00:32:52.930 ************************************ 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1093418 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1093418 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1093418 ']' 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.930 
18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:52.930 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:52.930 [2024-07-20 18:08:27.563150] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:52.930 [2024-07-20 18:08:27.563236] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:52.930 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.930 [2024-07-20 18:08:27.630939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.188 [2024-07-20 18:08:27.727593] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:53.188 [2024-07-20 18:08:27.727648] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:53.188 [2024-07-20 18:08:27.727664] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:53.188 [2024-07-20 18:08:27.727678] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:53.188 [2024-07-20 18:08:27.727689] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
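The digest tests reuse the namespace-backed TCP topology that nvmftestinit/nvmf_tcp_init logged a few entries up: the target-side E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator keeps cvl_0_1 as 10.0.0.1, the listener port is opened, and both directions are ping-checked before nvmf_tgt is started inside the namespace. A condensed restatement of those steps, using the interface names this particular rig exposes:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # NVMF_INITIATOR_IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # NVMF_FIRST_TARGET_IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow the NVMe/TCP listener port
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
# nvmf_tgt then runs inside the namespace, as logged above:
# ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc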
00:32:53.188 [2024-07-20 18:08:27.727737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.188 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:53.188 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:32:53.188 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:53.188 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:53.188 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:53.188 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:53.188 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:53.188 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:53.188 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:53.188 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.188 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:53.188 null0 00:32:53.188 [2024-07-20 18:08:27.910710] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:53.188 [2024-07-20 18:08:27.934971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:53.188 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.188 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:53.188 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:53.189 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:53.189 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:53.189 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:53.189 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:53.189 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:53.189 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1093443 00:32:53.189 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1093443 /var/tmp/bperf.sock 00:32:53.189 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1093443 ']' 00:32:53.189 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:53.189 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:53.189 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:53.189 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:32:53.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:53.189 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:53.189 18:08:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:53.189 [2024-07-20 18:08:27.984064] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:53.189 [2024-07-20 18:08:27.984160] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093443 ] 00:32:53.447 EAL: No free 2048 kB hugepages reported on node 1 00:32:53.447 [2024-07-20 18:08:28.054208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.447 [2024-07-20 18:08:28.151045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.447 18:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:53.447 18:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:32:53.447 18:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:53.447 18:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:53.447 18:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:54.013 18:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:54.013 18:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:54.269 nvme0n1 00:32:54.269 18:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:54.269 18:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:54.269 Running I/O for 2 seconds... 
00:32:56.790 00:32:56.790 Latency(us) 00:32:56.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.790 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:56.790 nvme0n1 : 2.01 19429.46 75.90 0.00 0.00 6576.99 3422.44 17087.91 00:32:56.790 =================================================================================================================== 00:32:56.790 Total : 19429.46 75.90 0.00 0.00 6576.99 3422.44 17087.91 00:32:56.790 0 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:56.790 | select(.opcode=="crc32c") 00:32:56.790 | "\(.module_name) \(.executed)"' 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1093443 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1093443 ']' 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1093443 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1093443 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1093443' 00:32:56.790 killing process with pid 1093443 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1093443 00:32:56.790 Received shutdown signal, test time was about 2.000000 seconds 00:32:56.790 00:32:56.790 Latency(us) 00:32:56.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.790 =================================================================================================================== 00:32:56.790 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1093443 00:32:56.790 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:56.791 18:08:31 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:56.791 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:56.791 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:56.791 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:56.791 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:56.791 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:56.791 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1093845 00:32:56.791 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:56.791 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1093845 /var/tmp/bperf.sock 00:32:56.791 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1093845 ']' 00:32:57.048 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:57.048 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:57.048 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:57.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:57.048 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:57.048 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:57.048 [2024-07-20 18:08:31.626059] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:57.048 [2024-07-20 18:08:31.626149] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093845 ] 00:32:57.048 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:57.048 Zero copy mechanism will not be used. 
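The pass that just completed was judged the way every digest pass in this test is judged: once perform_tests returns, the harness asks bdevperf's accel layer how many crc32c operations ran and which module executed them (the accel_get_stats / jq exchange logged just above), and expects the software module because no DSA offload was requested (scan_dsa=false). A sketch of that check, assuming the bdevperf RPC socket from the log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Finish the init that "-z --wait-for-rpc" deferred, then attach the target with data digests on.
$rpc -s /var/tmp/bperf.sock framework_start_init
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# ...perform_tests runs the 2-second workload here...
read -r acc_module acc_executed < <($rpc -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
(( acc_executed > 0 ))           # digests were actually computed during the run
[[ $acc_module == software ]]    # software crc32c path is expected with DSA disabled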
00:32:57.048 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.048 [2024-07-20 18:08:31.687632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.048 [2024-07-20 18:08:31.779520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.048 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:57.048 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:32:57.048 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:57.048 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:57.048 18:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:57.614 18:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:57.614 18:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:57.872 nvme0n1 00:32:57.872 18:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:57.872 18:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:57.872 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:57.872 Zero copy mechanism will not be used. 00:32:57.872 Running I/O for 2 seconds... 
00:33:00.400 00:33:00.400 Latency(us) 00:33:00.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.400 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:00.400 nvme0n1 : 2.00 1748.50 218.56 0.00 0.00 9145.29 8738.13 12524.66 00:33:00.400 =================================================================================================================== 00:33:00.400 Total : 1748.50 218.56 0.00 0.00 9145.29 8738.13 12524.66 00:33:00.400 0 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:00.400 | select(.opcode=="crc32c") 00:33:00.400 | "\(.module_name) \(.executed)"' 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1093845 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1093845 ']' 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1093845 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1093845 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1093845' 00:33:00.400 killing process with pid 1093845 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1093845 00:33:00.400 Received shutdown signal, test time was about 2.000000 seconds 00:33:00.400 00:33:00.400 Latency(us) 00:33:00.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.400 =================================================================================================================== 00:33:00.400 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:00.400 18:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1093845 00:33:00.400 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:00.400 18:08:35 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:00.400 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:00.400 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:00.400 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:00.400 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:00.400 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:00.400 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1094256 00:33:00.400 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:00.400 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1094256 /var/tmp/bperf.sock 00:33:00.400 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1094256 ']' 00:33:00.400 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:00.400 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:00.400 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:00.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:00.400 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:00.400 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:00.400 [2024-07-20 18:08:35.174476] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
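The randwrite pass starting here follows the same template as every run_bperf invocation in this test: launch a fresh bdevperf with the requested workload, block size and queue depth, drive it through bdevperf.py, then kill the process and move on to the next combination. A sketch of one iteration using the paths and flags from the log (only -w, -o and -q change between the four passes):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!
# ...framework_start_init and the nvme0 attach over /var/tmp/bperf.sock, as sketched earlier...
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
kill $bperfpid    # the harness' killprocess does this and then waits for the pid to exit
wait $bperfpid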
00:33:00.400 [2024-07-20 18:08:35.174568] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1094256 ] 00:33:00.658 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.659 [2024-07-20 18:08:35.237519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.659 [2024-07-20 18:08:35.324596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.659 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:00.659 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:00.659 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:00.659 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:00.659 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:01.225 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:01.225 18:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:01.485 nvme0n1 00:33:01.485 18:08:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:01.485 18:08:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:01.485 Running I/O for 2 seconds... 
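Once bdevperf is listening on its socket, the harness drives it entirely over RPC: framework_start_init finishes initialization, bdev_nvme_attach_controller connects to the NVMe/TCP target with --ddgst so every data PDU carries a CRC32C data digest, and the bdevperf.py helper fires perform_tests to start the timed run. A condensed sketch of that sequence, assuming the same socket and workspace paths as this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock framework_start_init
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests

The digest work generated by --ddgst is what the crc32c accel statistics checked after each run are counting.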
00:33:03.454 00:33:03.454 Latency(us) 00:33:03.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.454 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:03.454 nvme0n1 : 2.01 20163.68 78.76 0.00 0.00 6333.29 2973.39 10437.21 00:33:03.454 =================================================================================================================== 00:33:03.454 Total : 20163.68 78.76 0.00 0.00 6333.29 2973.39 10437.21 00:33:03.454 0 00:33:03.454 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:03.454 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:03.454 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:03.454 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:03.454 | select(.opcode=="crc32c") 00:33:03.454 | "\(.module_name) \(.executed)"' 00:33:03.454 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:03.711 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:03.711 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:03.711 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:03.711 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:03.711 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1094256 00:33:03.711 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1094256 ']' 00:33:03.711 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1094256 00:33:03.711 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:03.711 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:03.711 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1094256 00:33:03.711 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:03.711 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:03.711 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1094256' 00:33:03.711 killing process with pid 1094256 00:33:03.711 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1094256 00:33:03.711 Received shutdown signal, test time was about 2.000000 seconds 00:33:03.711 00:33:03.711 Latency(us) 00:33:03.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.711 =================================================================================================================== 00:33:03.711 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:03.711 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1094256 00:33:03.992 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:03.992 18:08:38 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:03.992 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:03.992 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:03.992 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:03.992 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:03.992 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:03.992 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1094660 00:33:03.992 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:03.992 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1094660 /var/tmp/bperf.sock 00:33:03.993 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1094660 ']' 00:33:03.993 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:03.993 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:03.993 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:03.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:03.993 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:03.993 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:03.993 [2024-07-20 18:08:38.729702] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:03.993 [2024-07-20 18:08:38.729811] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1094660 ] 00:33:03.993 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:03.993 Zero copy mechanism will not be used. 
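Two details worth noting for this last clean run: the "I/O size of 131072 is greater than zero copy threshold (65536)" lines are informational only, meaning bdevperf copies the 128 KiB payloads instead of using the socket zero-copy path, and after every run the script verifies that the digests were actually computed by the expected accel module. With scan_dsa=false the expected module is "software", and the check reads the accel statistics back over the same RPC socket. A sketch of that verification, using the jq filter visible in the traces above:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
    | { read -r acc_module acc_executed
        (( acc_executed > 0 )) && [[ $acc_module == software ]]; }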
00:33:03.993 EAL: No free 2048 kB hugepages reported on node 1 00:33:04.250 [2024-07-20 18:08:38.795395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.250 [2024-07-20 18:08:38.883070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:04.250 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:04.250 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:04.250 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:04.250 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:04.250 18:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:04.816 18:08:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:04.816 18:08:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:05.073 nvme0n1 00:33:05.073 18:08:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:05.073 18:08:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:05.330 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:05.330 Zero copy mechanism will not be used. 00:33:05.330 Running I/O for 2 seconds... 
00:33:07.229 00:33:07.229 Latency(us) 00:33:07.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.229 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:07.229 nvme0n1 : 2.02 716.86 89.61 0.00 0.00 22194.51 7281.78 33010.73 00:33:07.229 =================================================================================================================== 00:33:07.229 Total : 716.86 89.61 0.00 0.00 22194.51 7281.78 33010.73 00:33:07.229 0 00:33:07.229 18:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:07.229 18:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:07.229 18:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:07.229 18:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:07.229 18:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:07.229 | select(.opcode=="crc32c") 00:33:07.229 | "\(.module_name) \(.executed)"' 00:33:07.487 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:07.487 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:07.487 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:07.487 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:07.487 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1094660 00:33:07.487 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1094660 ']' 00:33:07.487 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1094660 00:33:07.487 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:07.487 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:07.487 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1094660 00:33:07.487 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:07.487 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:07.487 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1094660' 00:33:07.487 killing process with pid 1094660 00:33:07.487 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1094660 00:33:07.487 Received shutdown signal, test time was about 2.000000 seconds 00:33:07.487 00:33:07.487 Latency(us) 00:33:07.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.487 =================================================================================================================== 00:33:07.487 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:07.487 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1094660 00:33:07.745 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1093418 00:33:07.745 18:08:42 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1093418 ']' 00:33:07.745 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1093418 00:33:07.745 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:07.745 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:07.745 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1093418 00:33:07.745 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:07.745 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:07.745 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1093418' 00:33:07.745 killing process with pid 1093418 00:33:07.745 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1093418 00:33:07.745 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1093418 00:33:08.003 00:33:08.003 real 0m15.138s 00:33:08.003 user 0m30.504s 00:33:08.003 sys 0m3.817s 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:08.003 ************************************ 00:33:08.003 END TEST nvmf_digest_clean 00:33:08.003 ************************************ 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:08.003 ************************************ 00:33:08.003 START TEST nvmf_digest_error 00:33:08.003 ************************************ 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1095210 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1095210 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1095210 ']' 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:08.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:08.003 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:08.003 [2024-07-20 18:08:42.746523] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:08.003 [2024-07-20 18:08:42.746611] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:08.003 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.262 [2024-07-20 18:08:42.812256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.262 [2024-07-20 18:08:42.899833] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:08.262 [2024-07-20 18:08:42.899893] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:08.262 [2024-07-20 18:08:42.899922] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:08.262 [2024-07-20 18:08:42.899934] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:08.262 [2024-07-20 18:08:42.899944] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
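The nvmf_digest_error test that starts here relies on the target being launched with --wait-for-rpc: while initialization is paused, accel_assign_opc routes all crc32c work to the "error" accel module (seen a few lines below as "Operation crc32c will be assigned to module error"), a null0 bdev and a TCP listener on 10.0.0.2:4420 are configured, and only then is the framework started. On the initiator side, the bperf instance is told to keep NVMe error statistics and to retry failed I/O without limit. A rough sketch of the two halves, assuming the default /var/tmp/spdk.sock for the target and /var/tmp/bperf.sock for bperf as used in this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # target side: send crc32c through the error-injection accel module before init completes
  $rpc accel_assign_opc -o crc32c -m error
  # initiator side: keep NVMe error stats, retry failed I/O indefinitely
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc accel_error_inject_error -o crc32c -t disable      # injection off while the controller attaches
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256   # enable crc32c corruption, -i 256 as passed in this run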
00:33:08.262 [2024-07-20 18:08:42.899972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.262 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:08.262 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:08.262 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:08.262 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:08.262 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:08.262 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:08.262 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:08.262 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.262 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:08.262 [2024-07-20 18:08:42.980586] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:08.262 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.262 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:08.262 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:08.262 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.262 18:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:08.520 null0 00:33:08.520 [2024-07-20 18:08:43.098689] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:08.520 [2024-07-20 18:08:43.122970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:08.520 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.520 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:08.520 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:08.520 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:08.520 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:08.520 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:08.520 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1095236 00:33:08.520 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:08.520 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1095236 /var/tmp/bperf.sock 00:33:08.520 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1095236 ']' 00:33:08.520 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:08.520 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:33:08.520 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:08.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:08.520 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:08.520 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:08.520 [2024-07-20 18:08:43.168437] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:08.520 [2024-07-20 18:08:43.168516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1095236 ] 00:33:08.520 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.520 [2024-07-20 18:08:43.231234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.777 [2024-07-20 18:08:43.323168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.777 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:08.777 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:08.777 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:08.777 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:09.034 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:09.034 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.034 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:09.034 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.034 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:09.034 18:08:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:09.292 nvme0n1 00:33:09.292 18:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:09.292 18:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.292 18:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:09.292 18:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.292 18:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:09.292 18:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:09.550 Running I/O for 2 seconds... 00:33:09.550 [2024-07-20 18:08:44.196242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.550 [2024-07-20 18:08:44.196305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.550 [2024-07-20 18:08:44.196326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.550 [2024-07-20 18:08:44.210109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.550 [2024-07-20 18:08:44.210159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.550 [2024-07-20 18:08:44.210176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.550 [2024-07-20 18:08:44.224751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.550 [2024-07-20 18:08:44.224783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.550 [2024-07-20 18:08:44.224809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.550 [2024-07-20 18:08:44.237807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.550 [2024-07-20 18:08:44.237853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.550 [2024-07-20 18:08:44.237872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.550 [2024-07-20 18:08:44.253245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.550 [2024-07-20 18:08:44.253274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.550 [2024-07-20 18:08:44.253307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.550 [2024-07-20 18:08:44.265742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.550 [2024-07-20 18:08:44.265770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.550 [2024-07-20 18:08:44.265812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.550 [2024-07-20 18:08:44.279284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.550 [2024-07-20 18:08:44.279312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13023 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:09.550 [2024-07-20 18:08:44.279345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.550 [2024-07-20 18:08:44.294166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.550 [2024-07-20 18:08:44.294196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.550 [2024-07-20 18:08:44.294229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.550 [2024-07-20 18:08:44.308101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.550 [2024-07-20 18:08:44.308131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.550 [2024-07-20 18:08:44.308163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.550 [2024-07-20 18:08:44.321089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.550 [2024-07-20 18:08:44.321119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.550 [2024-07-20 18:08:44.321135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.550 [2024-07-20 18:08:44.334545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.550 [2024-07-20 18:08:44.334573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.550 [2024-07-20 18:08:44.334604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.808 [2024-07-20 18:08:44.349875] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.808 [2024-07-20 18:08:44.349905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.808 [2024-07-20 18:08:44.349922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.808 [2024-07-20 18:08:44.362786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.362835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.362852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.809 [2024-07-20 18:08:44.376627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.376672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:50 nsid:1 lba:13779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.376690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.809 [2024-07-20 18:08:44.390832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.390876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.390901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.809 [2024-07-20 18:08:44.403848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.403878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.403894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.809 [2024-07-20 18:08:44.417752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.417781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.417807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.809 [2024-07-20 18:08:44.431426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.431456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.431487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.809 [2024-07-20 18:08:44.445494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.445524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.445541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.809 [2024-07-20 18:08:44.458416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.458454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.458483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.809 [2024-07-20 18:08:44.471755] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.471806] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.471824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.809 [2024-07-20 18:08:44.484895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.484925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.484943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.809 [2024-07-20 18:08:44.499843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.499875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.499892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.809 [2024-07-20 18:08:44.513281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.513331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.513348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.809 [2024-07-20 18:08:44.526679] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.526725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.526741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.809 [2024-07-20 18:08:44.539978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.540007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.540024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.809 [2024-07-20 18:08:44.553399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.553429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.553462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.809 [2024-07-20 18:08:44.566906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.566936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.566953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.809 [2024-07-20 18:08:44.580012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.580042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.580058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.809 [2024-07-20 18:08:44.594425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:09.809 [2024-07-20 18:08:44.594455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.809 [2024-07-20 18:08:44.594487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.067 [2024-07-20 18:08:44.607416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.067 [2024-07-20 18:08:44.607447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.067 [2024-07-20 18:08:44.607465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.067 [2024-07-20 18:08:44.621317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.067 [2024-07-20 18:08:44.621348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.067 [2024-07-20 18:08:44.621372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.067 [2024-07-20 18:08:44.635083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.067 [2024-07-20 18:08:44.635115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.067 [2024-07-20 18:08:44.635132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.067 [2024-07-20 18:08:44.647974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.067 [2024-07-20 18:08:44.648004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.067 [2024-07-20 18:08:44.648036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.067 [2024-07-20 18:08:44.661996] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.067 [2024-07-20 18:08:44.662026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.067 [2024-07-20 18:08:44.662042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.067 [2024-07-20 18:08:44.674549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.067 [2024-07-20 18:08:44.674577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.067 [2024-07-20 18:08:44.674608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.067 [2024-07-20 18:08:44.688836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.067 [2024-07-20 18:08:44.688865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.067 [2024-07-20 18:08:44.688882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.067 [2024-07-20 18:08:44.701929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.067 [2024-07-20 18:08:44.701958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.067 [2024-07-20 18:08:44.701975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.067 [2024-07-20 18:08:44.715856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.067 [2024-07-20 18:08:44.715901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.067 [2024-07-20 18:08:44.715918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.067 [2024-07-20 18:08:44.730961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.067 [2024-07-20 18:08:44.730991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.067 [2024-07-20 18:08:44.731007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.067 [2024-07-20 18:08:44.745144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.067 [2024-07-20 18:08:44.745180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.067 [2024-07-20 18:08:44.745197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:10.067 [2024-07-20 18:08:44.758121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.067 [2024-07-20 18:08:44.758151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.067 [2024-07-20 18:08:44.758168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.067 [2024-07-20 18:08:44.770843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.067 [2024-07-20 18:08:44.770873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.067 [2024-07-20 18:08:44.770889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.067 [2024-07-20 18:08:44.785180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.067 [2024-07-20 18:08:44.785209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.067 [2024-07-20 18:08:44.785241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.067 [2024-07-20 18:08:44.799399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.067 [2024-07-20 18:08:44.799427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.067 [2024-07-20 18:08:44.799460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.067 [2024-07-20 18:08:44.812695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.067 [2024-07-20 18:08:44.812722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.067 [2024-07-20 18:08:44.812753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.067 [2024-07-20 18:08:44.825483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.068 [2024-07-20 18:08:44.825512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.068 [2024-07-20 18:08:44.825545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.068 [2024-07-20 18:08:44.839018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.068 [2024-07-20 18:08:44.839047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.068 [2024-07-20 18:08:44.839064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.068 [2024-07-20 18:08:44.852534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.068 [2024-07-20 18:08:44.852565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.068 [2024-07-20 18:08:44.852582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.325 [2024-07-20 18:08:44.866926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.325 [2024-07-20 18:08:44.866972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-07-20 18:08:44.866988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.325 [2024-07-20 18:08:44.882595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.325 [2024-07-20 18:08:44.882624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-07-20 18:08:44.882656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.325 [2024-07-20 18:08:44.895828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.325 [2024-07-20 18:08:44.895872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-07-20 18:08:44.895888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.325 [2024-07-20 18:08:44.910164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.325 [2024-07-20 18:08:44.910194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-07-20 18:08:44.910226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.325 [2024-07-20 18:08:44.923446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.325 [2024-07-20 18:08:44.923474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-07-20 18:08:44.923491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.325 [2024-07-20 18:08:44.936077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.325 [2024-07-20 18:08:44.936121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-07-20 18:08:44.936138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.325 [2024-07-20 18:08:44.950809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.325 [2024-07-20 18:08:44.950838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.325 [2024-07-20 18:08:44.950855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.326 [2024-07-20 18:08:44.964398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.326 [2024-07-20 18:08:44.964427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.326 [2024-07-20 18:08:44.964457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.326 [2024-07-20 18:08:44.977125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.326 [2024-07-20 18:08:44.977154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.326 [2024-07-20 18:08:44.977193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.326 [2024-07-20 18:08:44.989693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.326 [2024-07-20 18:08:44.989721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.326 [2024-07-20 18:08:44.989753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.326 [2024-07-20 18:08:45.002776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.326 [2024-07-20 18:08:45.002829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.326 [2024-07-20 18:08:45.002846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.326 [2024-07-20 18:08:45.016516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.326 [2024-07-20 18:08:45.016544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.326 [2024-07-20 18:08:45.016577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.326 [2024-07-20 18:08:45.028225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.326 [2024-07-20 18:08:45.028253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
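The pattern repeated through this 2-second window is the injected failure doing its job: the host's nvme_tcp receive path computes the data digest for each read, finds it does not match the digest carried in the PDU (the "data digest error on tqpair" lines), and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. the Do Not Retry bit clear, so bdev_nvme retries it, which --bdev-retry-count -1 allows indefinitely. A quick, purely illustrative way to tally these hits from a captured copy of this output (the bperf.log filename is hypothetical, not part of the test):

  grep -c 'data digest error on tqpair' bperf.log           # digest mismatches seen by the host
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' bperf.log     # completions that will be retried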
00:33:10.326 [2024-07-20 18:08:45.028285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.326 [2024-07-20 18:08:45.042158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.326 [2024-07-20 18:08:45.042186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.326 [2024-07-20 18:08:45.042219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.326 [2024-07-20 18:08:45.054644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.326 [2024-07-20 18:08:45.054672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.326 [2024-07-20 18:08:45.054704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.326 [2024-07-20 18:08:45.067159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.326 [2024-07-20 18:08:45.067187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.326 [2024-07-20 18:08:45.067218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.326 [2024-07-20 18:08:45.081013] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.326 [2024-07-20 18:08:45.081043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.326 [2024-07-20 18:08:45.081060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.326 [2024-07-20 18:08:45.092907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.326 [2024-07-20 18:08:45.092943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.326 [2024-07-20 18:08:45.092975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.326 [2024-07-20 18:08:45.107174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.326 [2024-07-20 18:08:45.107203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.326 [2024-07-20 18:08:45.107236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.326 [2024-07-20 18:08:45.118124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.326 [2024-07-20 18:08:45.118168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 
lba:16640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.326 [2024-07-20 18:08:45.118185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.584 [2024-07-20 18:08:45.133060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.584 [2024-07-20 18:08:45.133105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.584 [2024-07-20 18:08:45.133122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.584 [2024-07-20 18:08:45.144940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.584 [2024-07-20 18:08:45.144984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.584 [2024-07-20 18:08:45.145001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.584 [2024-07-20 18:08:45.157373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.584 [2024-07-20 18:08:45.157401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.584 [2024-07-20 18:08:45.157432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.584 [2024-07-20 18:08:45.172102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.584 [2024-07-20 18:08:45.172131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.584 [2024-07-20 18:08:45.172162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.584 [2024-07-20 18:08:45.184211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.584 [2024-07-20 18:08:45.184239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.584 [2024-07-20 18:08:45.184272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.584 [2024-07-20 18:08:45.197695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.584 [2024-07-20 18:08:45.197723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.584 [2024-07-20 18:08:45.197755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.584 [2024-07-20 18:08:45.210275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.584 [2024-07-20 18:08:45.210303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.584 [2024-07-20 18:08:45.210336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.584 [2024-07-20 18:08:45.222659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.584 [2024-07-20 18:08:45.222687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.584 [2024-07-20 18:08:45.222718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.584 [2024-07-20 18:08:45.236574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.584 [2024-07-20 18:08:45.236603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.584 [2024-07-20 18:08:45.236620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.584 [2024-07-20 18:08:45.249047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.584 [2024-07-20 18:08:45.249074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.584 [2024-07-20 18:08:45.249105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.584 [2024-07-20 18:08:45.261349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.584 [2024-07-20 18:08:45.261379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.584 [2024-07-20 18:08:45.261395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.584 [2024-07-20 18:08:45.274623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.584 [2024-07-20 18:08:45.274651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.584 [2024-07-20 18:08:45.274683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.584 [2024-07-20 18:08:45.288706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.584 [2024-07-20 18:08:45.288735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.584 [2024-07-20 18:08:45.288768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.584 [2024-07-20 18:08:45.301774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 
00:33:10.584 [2024-07-20 18:08:45.301810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.585 [2024-07-20 18:08:45.301843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.585 [2024-07-20 18:08:45.315364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.585 [2024-07-20 18:08:45.315398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.585 [2024-07-20 18:08:45.315431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.585 [2024-07-20 18:08:45.327421] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.585 [2024-07-20 18:08:45.327449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.585 [2024-07-20 18:08:45.327480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.585 [2024-07-20 18:08:45.341033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.585 [2024-07-20 18:08:45.341061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.585 [2024-07-20 18:08:45.341093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.585 [2024-07-20 18:08:45.354574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.585 [2024-07-20 18:08:45.354602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.585 [2024-07-20 18:08:45.354617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.585 [2024-07-20 18:08:45.366868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.585 [2024-07-20 18:08:45.366897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.585 [2024-07-20 18:08:45.366929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.842 [2024-07-20 18:08:45.380040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.842 [2024-07-20 18:08:45.380070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.842 [2024-07-20 18:08:45.380086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.842 [2024-07-20 18:08:45.392973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.842 [2024-07-20 18:08:45.393001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.842 [2024-07-20 18:08:45.393033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.842 [2024-07-20 18:08:45.407168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.842 [2024-07-20 18:08:45.407196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.842 [2024-07-20 18:08:45.407227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.842 [2024-07-20 18:08:45.420442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.842 [2024-07-20 18:08:45.420471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.842 [2024-07-20 18:08:45.420502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.842 [2024-07-20 18:08:45.431744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.842 [2024-07-20 18:08:45.431772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.842 [2024-07-20 18:08:45.431814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.842 [2024-07-20 18:08:45.444949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.842 [2024-07-20 18:08:45.444977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.842 [2024-07-20 18:08:45.444993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.842 [2024-07-20 18:08:45.459582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.842 [2024-07-20 18:08:45.459610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.842 [2024-07-20 18:08:45.459643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.842 [2024-07-20 18:08:45.471681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.842 [2024-07-20 18:08:45.471713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.842 [2024-07-20 18:08:45.471731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.842 [2024-07-20 18:08:45.486010] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.842 [2024-07-20 18:08:45.486041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.842 [2024-07-20 18:08:45.486057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.842 [2024-07-20 18:08:45.498590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.842 [2024-07-20 18:08:45.498620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.842 [2024-07-20 18:08:45.498637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.842 [2024-07-20 18:08:45.510902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.842 [2024-07-20 18:08:45.510930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.842 [2024-07-20 18:08:45.510961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.842 [2024-07-20 18:08:45.525047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.842 [2024-07-20 18:08:45.525075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.842 [2024-07-20 18:08:45.525108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.842 [2024-07-20 18:08:45.537856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.842 [2024-07-20 18:08:45.537898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.842 [2024-07-20 18:08:45.537920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.842 [2024-07-20 18:08:45.550073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.842 [2024-07-20 18:08:45.550101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.842 [2024-07-20 18:08:45.550134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.842 [2024-07-20 18:08:45.563373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.842 [2024-07-20 18:08:45.563416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.842 [2024-07-20 18:08:45.563433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:10.842 [2024-07-20 18:08:45.577513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.843 [2024-07-20 18:08:45.577543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.843 [2024-07-20 18:08:45.577559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.843 [2024-07-20 18:08:45.589901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.843 [2024-07-20 18:08:45.589931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.843 [2024-07-20 18:08:45.589963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.843 [2024-07-20 18:08:45.604323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.843 [2024-07-20 18:08:45.604352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.843 [2024-07-20 18:08:45.604383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.843 [2024-07-20 18:08:45.616213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.843 [2024-07-20 18:08:45.616245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.843 [2024-07-20 18:08:45.616264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.843 [2024-07-20 18:08:45.630778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:10.843 [2024-07-20 18:08:45.630815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.843 [2024-07-20 18:08:45.630832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.643303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.643333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.643365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.656343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.656378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.656410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.670154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.670184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.670216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.683408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.683435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.683466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.696569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.696598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.696614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.709092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.709121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.709152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.723270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.723300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.723331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.736481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.736510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.736541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.749562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.749591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.749622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.761535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.761564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.761597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.774774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.774826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.774844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.789675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.789718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.789734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.802637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.802680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.802696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.815279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.815307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.815339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.827483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.827511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.827542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.841497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.841524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:11.101 [2024-07-20 18:08:45.841555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.854352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.854395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.854412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.867323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.867352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.867384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.879959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.879998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.880037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.101 [2024-07-20 18:08:45.894602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.101 [2024-07-20 18:08:45.894631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.101 [2024-07-20 18:08:45.894648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:45.907113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:45.907156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:45.907173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:45.919646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:45.919674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:45.919691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:45.932115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:45.932143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 
lba:12396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:45.932173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:45.947247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:45.947274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:45.947290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:45.958702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:45.958730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:45.958760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:45.972777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:45.972830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:45.972847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:45.985251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:45.985294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:45.985311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:45.998254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:45.998283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:45.998301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:46.011728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:46.011758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:46.011775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:46.024919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:46.024946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:46.024977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:46.037544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:46.037573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:46.037606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:46.051704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:46.051733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:46.051765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:46.064424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:46.064453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:46.064486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:46.077689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:46.077718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:46.077750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:46.090881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:46.090910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:46.090941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:46.103276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:46.103319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:46.103340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:46.116445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 
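The 2-second run summary and the transient-error check follow. The error count is read back over bdevperf's RPC socket from the iostat of nvme0n1; a minimal sketch of that readback, mirroring the get_transient_errcount helper traced below (socket path and jq filter as shown in the trace, rpc.py path abbreviated), is:

  get_transient_errcount() {
      local bdev=$1
      ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  }
  (( $(get_transient_errcount nvme0n1) > 0 ))   # this run reports 150 transient transport errors

The counter is only populated because bdev_nvme_set_options is given --nvme-error-stat when each bperf instance is set up, as visible in the trace further down.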
00:33:11.360 [2024-07-20 18:08:46.116490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:46.116506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:46.129268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:46.129295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:46.129327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.360 [2024-07-20 18:08:46.142561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.360 [2024-07-20 18:08:46.142589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.360 [2024-07-20 18:08:46.142621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.619 [2024-07-20 18:08:46.155984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.619 [2024-07-20 18:08:46.156028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.619 [2024-07-20 18:08:46.156046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.619 [2024-07-20 18:08:46.168569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.619 [2024-07-20 18:08:46.168599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.619 [2024-07-20 18:08:46.168616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.619 [2024-07-20 18:08:46.182044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e98d0) 00:33:11.619 [2024-07-20 18:08:46.182073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.619 [2024-07-20 18:08:46.182105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:11.619
00:33:11.619 Latency(us)
00:33:11.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:11.619 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:11.619 nvme0n1 : 2.01 19113.36 74.66 0.00 0.00 6686.66 3786.52 17670.45
00:33:11.619 ===================================================================================================================
00:33:11.619 Total : 19113.36 74.66 0.00 0.00 6686.66 3786.52 17670.45
00:33:11.619 0
00:33:11.619 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount
nvme0n1 00:33:11.619 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:11.619 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:11.619 | .driver_specific 00:33:11.619 | .nvme_error 00:33:11.619 | .status_code 00:33:11.619 | .command_transient_transport_error' 00:33:11.619 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:11.877 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 150 > 0 )) 00:33:11.877 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1095236 00:33:11.877 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1095236 ']' 00:33:11.877 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1095236 00:33:11.877 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:11.877 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:11.877 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1095236 00:33:11.877 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:11.877 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:11.877 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1095236' 00:33:11.877 killing process with pid 1095236 00:33:11.877 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1095236 00:33:11.877 Received shutdown signal, test time was about 2.000000 seconds 00:33:11.877 00:33:11.877 Latency(us) 00:33:11.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:11.877 =================================================================================================================== 00:33:11.877 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:11.877 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1095236 00:33:12.135 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:12.135 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:12.135 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:12.135 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:12.135 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:12.135 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1095642 00:33:12.135 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:12.135 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1095642 /var/tmp/bperf.sock 00:33:12.135 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1095642 ']' 00:33:12.135 18:08:46 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:12.135 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:12.135 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:12.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:12.135 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:12.135 18:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.135 [2024-07-20 18:08:46.751290] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:12.135 [2024-07-20 18:08:46.751371] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1095642 ] 00:33:12.135 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:12.135 Zero copy mechanism will not be used. 00:33:12.135 EAL: No free 2048 kB hugepages reported on node 1 00:33:12.135 [2024-07-20 18:08:46.813411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.135 [2024-07-20 18:08:46.908820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.393 18:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:12.393 18:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:12.393 18:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:12.393 18:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:12.650 18:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:12.650 18:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.650 18:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.650 18:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.650 18:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:12.650 18:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:12.907 nvme0n1 00:33:12.907 18:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:12.907 18:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.907 18:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.907 18:08:47 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.907 18:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:12.907 18:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:12.907 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:12.907 Zero copy mechanism will not be used. 00:33:12.907 Running I/O for 2 seconds... 00:33:13.165 [2024-07-20 18:08:47.729816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.165 [2024-07-20 18:08:47.729883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.165 [2024-07-20 18:08:47.729904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.165 [2024-07-20 18:08:47.748708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.165 [2024-07-20 18:08:47.748755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.165 [2024-07-20 18:08:47.748772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:13.165 [2024-07-20 18:08:47.767425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.165 [2024-07-20 18:08:47.767470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.165 [2024-07-20 18:08:47.767486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:13.165 [2024-07-20 18:08:47.786146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.165 [2024-07-20 18:08:47.786189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.165 [2024-07-20 18:08:47.786205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.165 [2024-07-20 18:08:47.804708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.165 [2024-07-20 18:08:47.804753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.165 [2024-07-20 18:08:47.804770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.165 [2024-07-20 18:08:47.823322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.165 [2024-07-20 18:08:47.823350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.165 [2024-07-20 18:08:47.823366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:13.165 [2024-07-20 18:08:47.842288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.165 [2024-07-20 18:08:47.842330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.165 [2024-07-20 18:08:47.842346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:13.165 [2024-07-20 18:08:47.860899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.165 [2024-07-20 18:08:47.860927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.165 [2024-07-20 18:08:47.860959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.165 [2024-07-20 18:08:47.879317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.165 [2024-07-20 18:08:47.879361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.165 [2024-07-20 18:08:47.879378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.165 [2024-07-20 18:08:47.897824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.165 [2024-07-20 18:08:47.897868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.165 [2024-07-20 18:08:47.897884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:13.165 [2024-07-20 18:08:47.916906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.165 [2024-07-20 18:08:47.916951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.165 [2024-07-20 18:08:47.916968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:13.165 [2024-07-20 18:08:47.935670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.165 [2024-07-20 18:08:47.935713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.165 [2024-07-20 18:08:47.935734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.165 [2024-07-20 18:08:47.954361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.165 [2024-07-20 18:08:47.954404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
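The digest errors in this stretch come from the second pass of the test (randread, 128 KiB I/Os, queue depth 16, data digest enabled). Condensing the host/digest.sh xtrace shown above into a rough sketch, the setup is approximately the following; paths are abbreviated relative to the SPDK tree, and the accel_error_inject_error calls are assumed to go to the target's default RPC socket, since the trace issues them through rpc_cmd rather than bperf_rpc:

  # start a dedicated bdevperf instance on its own RPC socket (-z: wait for perform_tests)
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  # once /var/tmp/bperf.sock is listening:
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable        # keep injection off while connecting (default socket assumed)
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32  # start corrupting crc32c operations (flags as in the trace)
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Because the controller is attached with --ddgst, the corrupted digests then surface on the host as the data digest errors and (00/22) completions logged here.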
00:33:13.165 [2024-07-20 18:08:47.954420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.423 [2024-07-20 18:08:47.974064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.423 [2024-07-20 18:08:47.974107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.423 [2024-07-20 18:08:47.974124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:13.423 [2024-07-20 18:08:47.992673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.423 [2024-07-20 18:08:47.992717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.423 [2024-07-20 18:08:47.992733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:13.423 [2024-07-20 18:08:48.011371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.423 [2024-07-20 18:08:48.011415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.423 [2024-07-20 18:08:48.011431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.423 [2024-07-20 18:08:48.029830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.423 [2024-07-20 18:08:48.029875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.423 [2024-07-20 18:08:48.029891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.423 [2024-07-20 18:08:48.048633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.423 [2024-07-20 18:08:48.048676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.423 [2024-07-20 18:08:48.048692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:13.423 [2024-07-20 18:08:48.067141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.423 [2024-07-20 18:08:48.067182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.423 [2024-07-20 18:08:48.067198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:13.423 [2024-07-20 18:08:48.086180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.423 [2024-07-20 18:08:48.086223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.423 [2024-07-20 18:08:48.086238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.423 [2024-07-20 18:08:48.104868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.423 [2024-07-20 18:08:48.104915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.423 [2024-07-20 18:08:48.104932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.423 [2024-07-20 18:08:48.123535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.423 [2024-07-20 18:08:48.123577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.423 [2024-07-20 18:08:48.123593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:13.423 [2024-07-20 18:08:48.141837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.423 [2024-07-20 18:08:48.141881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.423 [2024-07-20 18:08:48.141897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:13.423 [2024-07-20 18:08:48.160532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.423 [2024-07-20 18:08:48.160558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.423 [2024-07-20 18:08:48.160574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.423 [2024-07-20 18:08:48.179343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.423 [2024-07-20 18:08:48.179387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.423 [2024-07-20 18:08:48.179402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.423 [2024-07-20 18:08:48.197811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.423 [2024-07-20 18:08:48.197853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.423 [2024-07-20 18:08:48.197868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:13.423 [2024-07-20 18:08:48.216859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.423 [2024-07-20 18:08:48.216887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.423 [2024-07-20 18:08:48.216904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:13.679 [2024-07-20 18:08:48.235682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.679 [2024-07-20 18:08:48.235725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.679 [2024-07-20 18:08:48.235741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.679 [2024-07-20 18:08:48.254321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.679 [2024-07-20 18:08:48.254365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.679 [2024-07-20 18:08:48.254381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.679 [2024-07-20 18:08:48.273187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.679 [2024-07-20 18:08:48.273228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.679 [2024-07-20 18:08:48.273244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:13.679 [2024-07-20 18:08:48.292060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.679 [2024-07-20 18:08:48.292103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.679 [2024-07-20 18:08:48.292119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:13.679 [2024-07-20 18:08:48.310511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.679 [2024-07-20 18:08:48.310553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.679 [2024-07-20 18:08:48.310568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.679 [2024-07-20 18:08:48.329263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.679 [2024-07-20 18:08:48.329305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.679 [2024-07-20 18:08:48.329321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.679 [2024-07-20 18:08:48.347705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 
00:33:13.679 [2024-07-20 18:08:48.347748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.679 [2024-07-20 18:08:48.347763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:13.679 [2024-07-20 18:08:48.366217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.679 [2024-07-20 18:08:48.366259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.679 [2024-07-20 18:08:48.366275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:13.680 [2024-07-20 18:08:48.384928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.680 [2024-07-20 18:08:48.384970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.680 [2024-07-20 18:08:48.384985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.680 [2024-07-20 18:08:48.403671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.680 [2024-07-20 18:08:48.403699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.680 [2024-07-20 18:08:48.403731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.680 [2024-07-20 18:08:48.422113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.680 [2024-07-20 18:08:48.422140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.680 [2024-07-20 18:08:48.422176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:13.680 [2024-07-20 18:08:48.440703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.680 [2024-07-20 18:08:48.440747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.680 [2024-07-20 18:08:48.440762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:13.680 [2024-07-20 18:08:48.459273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.680 [2024-07-20 18:08:48.459301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.680 [2024-07-20 18:08:48.459332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.937 [2024-07-20 18:08:48.478569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.937 [2024-07-20 18:08:48.478596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.937 [2024-07-20 18:08:48.478628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.937 [2024-07-20 18:08:48.497473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.937 [2024-07-20 18:08:48.497516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.937 [2024-07-20 18:08:48.497531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:13.937 [2024-07-20 18:08:48.516088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.937 [2024-07-20 18:08:48.516129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.937 [2024-07-20 18:08:48.516145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:13.937 [2024-07-20 18:08:48.535498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.937 [2024-07-20 18:08:48.535525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.937 [2024-07-20 18:08:48.535556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.937 [2024-07-20 18:08:48.554223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.937 [2024-07-20 18:08:48.554251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.937 [2024-07-20 18:08:48.554267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.937 [2024-07-20 18:08:48.572862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.937 [2024-07-20 18:08:48.572890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.937 [2024-07-20 18:08:48.572920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:13.937 [2024-07-20 18:08:48.591592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.937 [2024-07-20 18:08:48.591640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.937 [2024-07-20 18:08:48.591657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:13.937 [2024-07-20 18:08:48.610135] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.937 [2024-07-20 18:08:48.610163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.937 [2024-07-20 18:08:48.610194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.937 [2024-07-20 18:08:48.628494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.937 [2024-07-20 18:08:48.628536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.937 [2024-07-20 18:08:48.628553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.937 [2024-07-20 18:08:48.647072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.937 [2024-07-20 18:08:48.647098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.937 [2024-07-20 18:08:48.647129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:13.937 [2024-07-20 18:08:48.665825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.937 [2024-07-20 18:08:48.665867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.937 [2024-07-20 18:08:48.665883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:13.937 [2024-07-20 18:08:48.684254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.937 [2024-07-20 18:08:48.684298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.937 [2024-07-20 18:08:48.684315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.937 [2024-07-20 18:08:48.702841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.937 [2024-07-20 18:08:48.702885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.937 [2024-07-20 18:08:48.702901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.937 [2024-07-20 18:08:48.721209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:13.937 [2024-07-20 18:08:48.721252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.937 [2024-07-20 18:08:48.721268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:33:14.195 [2024-07-20 18:08:48.740545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.195 [2024-07-20 18:08:48.740587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.195 [2024-07-20 18:08:48.740603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.195 [2024-07-20 18:08:48.759015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.195 [2024-07-20 18:08:48.759058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.195 [2024-07-20 18:08:48.759074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.195 [2024-07-20 18:08:48.777520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.195 [2024-07-20 18:08:48.777563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.195 [2024-07-20 18:08:48.777578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.195 [2024-07-20 18:08:48.796268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.195 [2024-07-20 18:08:48.796295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.195 [2024-07-20 18:08:48.796326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.195 [2024-07-20 18:08:48.814856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.195 [2024-07-20 18:08:48.814884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.195 [2024-07-20 18:08:48.814900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.195 [2024-07-20 18:08:48.833561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.195 [2024-07-20 18:08:48.833588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.195 [2024-07-20 18:08:48.833619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.195 [2024-07-20 18:08:48.852132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.195 [2024-07-20 18:08:48.852158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.195 [2024-07-20 18:08:48.852174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.195 [2024-07-20 18:08:48.870522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.195 [2024-07-20 18:08:48.870549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.195 [2024-07-20 18:08:48.870580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.196 [2024-07-20 18:08:48.889067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.196 [2024-07-20 18:08:48.889111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.196 [2024-07-20 18:08:48.889127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.196 [2024-07-20 18:08:48.907990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.196 [2024-07-20 18:08:48.908038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.196 [2024-07-20 18:08:48.908055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.196 [2024-07-20 18:08:48.926381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.196 [2024-07-20 18:08:48.926409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.196 [2024-07-20 18:08:48.926440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.196 [2024-07-20 18:08:48.945214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.196 [2024-07-20 18:08:48.945242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.196 [2024-07-20 18:08:48.945274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.196 [2024-07-20 18:08:48.963732] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.196 [2024-07-20 18:08:48.963760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.196 [2024-07-20 18:08:48.963800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.196 [2024-07-20 18:08:48.982489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.196 [2024-07-20 18:08:48.982531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.196 [2024-07-20 18:08:48.982547] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.453 [2024-07-20 18:08:49.001889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.453 [2024-07-20 18:08:49.001918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.453 [2024-07-20 18:08:49.001935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.453 [2024-07-20 18:08:49.020348] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.453 [2024-07-20 18:08:49.020391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.453 [2024-07-20 18:08:49.020407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.453 [2024-07-20 18:08:49.038875] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.453 [2024-07-20 18:08:49.038918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.454 [2024-07-20 18:08:49.038933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.454 [2024-07-20 18:08:49.058837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.454 [2024-07-20 18:08:49.058882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.454 [2024-07-20 18:08:49.058900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.454 [2024-07-20 18:08:49.077501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.454 [2024-07-20 18:08:49.077545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.454 [2024-07-20 18:08:49.077561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.454 [2024-07-20 18:08:49.096387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.454 [2024-07-20 18:08:49.096430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.454 [2024-07-20 18:08:49.096445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.454 [2024-07-20 18:08:49.115868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.454 [2024-07-20 18:08:49.115900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:14.454 [2024-07-20 18:08:49.115916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.454 [2024-07-20 18:08:49.135023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.454 [2024-07-20 18:08:49.135053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.454 [2024-07-20 18:08:49.135069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.454 [2024-07-20 18:08:49.153989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.454 [2024-07-20 18:08:49.154018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.454 [2024-07-20 18:08:49.154035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.454 [2024-07-20 18:08:49.172928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.454 [2024-07-20 18:08:49.172972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.454 [2024-07-20 18:08:49.172988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.454 [2024-07-20 18:08:49.192159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.454 [2024-07-20 18:08:49.192205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.454 [2024-07-20 18:08:49.192220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.454 [2024-07-20 18:08:49.210911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.454 [2024-07-20 18:08:49.210954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.454 [2024-07-20 18:08:49.210969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.454 [2024-07-20 18:08:49.229711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.454 [2024-07-20 18:08:49.229753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.454 [2024-07-20 18:08:49.229774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.454 [2024-07-20 18:08:49.248462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.454 [2024-07-20 18:08:49.248505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.454 [2024-07-20 18:08:49.248520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.711 [2024-07-20 18:08:49.267243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.711 [2024-07-20 18:08:49.267271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.711 [2024-07-20 18:08:49.267301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.711 [2024-07-20 18:08:49.285827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.711 [2024-07-20 18:08:49.285870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.711 [2024-07-20 18:08:49.285886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.711 [2024-07-20 18:08:49.304655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.711 [2024-07-20 18:08:49.304697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.711 [2024-07-20 18:08:49.304713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.711 [2024-07-20 18:08:49.323121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.711 [2024-07-20 18:08:49.323164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.711 [2024-07-20 18:08:49.323179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.711 [2024-07-20 18:08:49.341820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.711 [2024-07-20 18:08:49.341862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.711 [2024-07-20 18:08:49.341877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.711 [2024-07-20 18:08:49.360390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.711 [2024-07-20 18:08:49.360434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.711 [2024-07-20 18:08:49.360450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.711 [2024-07-20 18:08:49.378991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.711 [2024-07-20 18:08:49.379019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.712 [2024-07-20 18:08:49.379035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.712 [2024-07-20 18:08:49.397772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.712 [2024-07-20 18:08:49.397812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.712 [2024-07-20 18:08:49.397829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.712 [2024-07-20 18:08:49.416526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.712 [2024-07-20 18:08:49.416568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.712 [2024-07-20 18:08:49.416584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.712 [2024-07-20 18:08:49.434983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.712 [2024-07-20 18:08:49.435010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.712 [2024-07-20 18:08:49.435025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.712 [2024-07-20 18:08:49.453249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.712 [2024-07-20 18:08:49.453276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.712 [2024-07-20 18:08:49.453306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.712 [2024-07-20 18:08:49.471870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.712 [2024-07-20 18:08:49.471912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.712 [2024-07-20 18:08:49.471928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.712 [2024-07-20 18:08:49.490808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.712 [2024-07-20 18:08:49.490851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.712 [2024-07-20 18:08:49.490867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.969 [2024-07-20 18:08:49.510014] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 
00:33:14.969 [2024-07-20 18:08:49.510043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.969 [2024-07-20 18:08:49.510060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.969 [2024-07-20 18:08:49.528398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.969 [2024-07-20 18:08:49.528441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.969 [2024-07-20 18:08:49.528456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.969 [2024-07-20 18:08:49.546787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.969 [2024-07-20 18:08:49.546821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.969 [2024-07-20 18:08:49.546853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.969 [2024-07-20 18:08:49.565228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.969 [2024-07-20 18:08:49.565271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.969 [2024-07-20 18:08:49.565287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.969 [2024-07-20 18:08:49.583719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.969 [2024-07-20 18:08:49.583759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.969 [2024-07-20 18:08:49.583774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.970 [2024-07-20 18:08:49.602550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.970 [2024-07-20 18:08:49.602593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.970 [2024-07-20 18:08:49.602609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.970 [2024-07-20 18:08:49.621119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.970 [2024-07-20 18:08:49.621162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.970 [2024-07-20 18:08:49.621177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.970 [2024-07-20 18:08:49.639895] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.970 [2024-07-20 18:08:49.639937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.970 [2024-07-20 18:08:49.639953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.970 [2024-07-20 18:08:49.658186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.970 [2024-07-20 18:08:49.658213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.970 [2024-07-20 18:08:49.658229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.970 [2024-07-20 18:08:49.676526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.970 [2024-07-20 18:08:49.676569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.970 [2024-07-20 18:08:49.676584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.970 [2024-07-20 18:08:49.695066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.970 [2024-07-20 18:08:49.695107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.970 [2024-07-20 18:08:49.695122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.970 [2024-07-20 18:08:49.713321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x86f2c0) 00:33:14.970 [2024-07-20 18:08:49.713349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.970 [2024-07-20 18:08:49.713386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.970 00:33:14.970 Latency(us) 00:33:14.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:14.970 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:14.970 nvme0n1 : 2.01 1651.77 206.47 0.00 0.00 9677.74 8980.86 20000.62 00:33:14.970 =================================================================================================================== 00:33:14.970 Total : 1651.77 206.47 0.00 0.00 9677.74 8980.86 20000.62 00:33:14.970 0 00:33:14.970 18:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:14.970 18:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:14.970 18:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:14.970 18:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:14.970 | .driver_specific 00:33:14.970 | .nvme_error 00:33:14.970 | .status_code 00:33:14.970 | .command_transient_transport_error' 00:33:15.227 18:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 107 > 0 )) 00:33:15.227 18:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1095642 00:33:15.227 18:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1095642 ']' 00:33:15.227 18:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1095642 00:33:15.227 18:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:15.227 18:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:15.227 18:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1095642 00:33:15.227 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:15.227 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:15.227 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1095642' 00:33:15.227 killing process with pid 1095642 00:33:15.227 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1095642 00:33:15.227 Received shutdown signal, test time was about 2.000000 seconds 00:33:15.227 00:33:15.227 Latency(us) 00:33:15.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:15.227 =================================================================================================================== 00:33:15.227 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:15.227 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1095642 00:33:15.485 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:15.485 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:15.485 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:15.485 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:15.485 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:15.485 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1096090 00:33:15.485 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:15.485 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1096090 /var/tmp/bperf.sock 00:33:15.485 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1096090 ']' 00:33:15.485 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:15.485 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:15.485 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:15.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:15.485 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:15.485 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:15.485 [2024-07-20 18:08:50.272906] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:15.485 [2024-07-20 18:08:50.272987] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1096090 ] 00:33:15.768 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.768 [2024-07-20 18:08:50.338237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.768 [2024-07-20 18:08:50.428630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.768 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:15.768 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:15.768 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:15.768 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:16.049 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:16.049 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.049 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:16.049 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.049 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:16.049 18:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:16.615 nvme0n1 00:33:16.615 18:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:16.615 18:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.615 18:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:16.615 18:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.615 18:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:16.615 18:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:16.615 Running I/O for 2 seconds... 
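The randwrite error pass now under way repeats the pattern of the randread pass above: start bdevperf idle on its own RPC socket, enable NVMe error counters, attach the target with data digests (--ddgst), corrupt the crc32c path, run I/O, and check that the transient-transport-error counter is non-zero. The sketch below condenses that sequence from the commands traced in this log; the paths, sockets and RPC arguments are copied from the trace, while the variable names and the assumption that rpc_cmd addresses the nvmf target's default RPC socket are illustrative, not an excerpt of host/digest.sh.
# Condensed sketch of the traced flow (illustrative only; RPC_BPERF and errcount are names introduced here).
RPC_BPERF="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
# 1. Start bdevperf idle (-z) on its own RPC socket with the workload parameters traced above.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
# 2. Enable per-command NVMe error counters, retry forever, and attach the target with data digest enabled.
$RPC_BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$RPC_BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# 3. Corrupt 256 crc32c operations (rpc_cmd in the trace; assumed to address the nvmf target's default RPC socket).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
# 4. Run the workload, then read back the transient-transport-error count and assert it is non-zero,
#    mirroring the "(( 107 > 0 ))" check at host/digest.sh@71 in the randread pass above.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
errcount=$($RPC_BPERF bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))
With that context, the per-I/O digest errors that follow are the expected effect of the injected crc32c corruption rather than a transport failure.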
00:33:16.615 [2024-07-20 18:08:51.398223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f81e0 00:33:16.615 [2024-07-20 18:08:51.400607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.615 [2024-07-20 18:08:51.400660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:16.872 [2024-07-20 18:08:51.415843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:16.872 [2024-07-20 18:08:51.416268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.872 [2024-07-20 18:08:51.416301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:16.872 [2024-07-20 18:08:51.433466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:16.872 [2024-07-20 18:08:51.433915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.872 [2024-07-20 18:08:51.433944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:16.872 [2024-07-20 18:08:51.450848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:16.872 [2024-07-20 18:08:51.451275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.872 [2024-07-20 18:08:51.451307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:16.872 [2024-07-20 18:08:51.468129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:16.872 [2024-07-20 18:08:51.468586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.872 [2024-07-20 18:08:51.468616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:16.872 [2024-07-20 18:08:51.485586] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:16.872 [2024-07-20 18:08:51.486054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.872 [2024-07-20 18:08:51.486084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:16.872 [2024-07-20 18:08:51.502874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:16.872 [2024-07-20 18:08:51.503336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.872 [2024-07-20 18:08:51.503367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 
cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:16.872 [2024-07-20 18:08:51.519869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:16.872 [2024-07-20 18:08:51.520311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.872 [2024-07-20 18:08:51.520342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:16.872 [2024-07-20 18:08:51.537202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:16.872 [2024-07-20 18:08:51.537642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.872 [2024-07-20 18:08:51.537672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:16.872 [2024-07-20 18:08:51.554301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:16.872 [2024-07-20 18:08:51.554743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.872 [2024-07-20 18:08:51.554789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:16.872 [2024-07-20 18:08:51.570540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:16.872 [2024-07-20 18:08:51.570971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.872 [2024-07-20 18:08:51.570998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:16.872 [2024-07-20 18:08:51.586807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:16.872 [2024-07-20 18:08:51.587247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.872 [2024-07-20 18:08:51.587276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:16.872 [2024-07-20 18:08:51.602705] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:16.872 [2024-07-20 18:08:51.603113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.872 [2024-07-20 18:08:51.603141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:16.872 [2024-07-20 18:08:51.618283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:16.872 [2024-07-20 18:08:51.618678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.872 [2024-07-20 18:08:51.618705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:16.872 [2024-07-20 18:08:51.633750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:16.872 [2024-07-20 18:08:51.634183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.872 [2024-07-20 18:08:51.634211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:16.872 [2024-07-20 18:08:51.649384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:16.872 [2024-07-20 18:08:51.649815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.873 [2024-07-20 18:08:51.649842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:16.873 [2024-07-20 18:08:51.664886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:16.873 [2024-07-20 18:08:51.665296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:16.873 [2024-07-20 18:08:51.665322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.130 [2024-07-20 18:08:51.680748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.130 [2024-07-20 18:08:51.681197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.130 [2024-07-20 18:08:51.681223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.130 [2024-07-20 18:08:51.696941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.130 [2024-07-20 18:08:51.697348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.130 [2024-07-20 18:08:51.697389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.130 [2024-07-20 18:08:51.713129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.130 [2024-07-20 18:08:51.713578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.130 [2024-07-20 18:08:51.713618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.130 [2024-07-20 18:08:51.729317] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.130 [2024-07-20 18:08:51.729778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.130 [2024-07-20 18:08:51.729814] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.130 [2024-07-20 18:08:51.745578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.130 [2024-07-20 18:08:51.745995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.130 [2024-07-20 18:08:51.746023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.130 [2024-07-20 18:08:51.762167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.130 [2024-07-20 18:08:51.762618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.130 [2024-07-20 18:08:51.762644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.130 [2024-07-20 18:08:51.778122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.130 [2024-07-20 18:08:51.778585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.131 [2024-07-20 18:08:51.778626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.131 [2024-07-20 18:08:51.794329] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.131 [2024-07-20 18:08:51.794765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.131 [2024-07-20 18:08:51.794813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.131 [2024-07-20 18:08:51.810465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.131 [2024-07-20 18:08:51.810897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.131 [2024-07-20 18:08:51.810924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.131 [2024-07-20 18:08:51.826492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.131 [2024-07-20 18:08:51.826930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.131 [2024-07-20 18:08:51.826962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.131 [2024-07-20 18:08:51.842595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.131 [2024-07-20 18:08:51.843040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.131 [2024-07-20 18:08:51.843067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.131 [2024-07-20 18:08:51.858611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.131 [2024-07-20 18:08:51.859041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.131 [2024-07-20 18:08:51.859084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.131 [2024-07-20 18:08:51.874624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.131 [2024-07-20 18:08:51.875040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.131 [2024-07-20 18:08:51.875068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.131 [2024-07-20 18:08:51.890925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.131 [2024-07-20 18:08:51.891374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.131 [2024-07-20 18:08:51.891401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.131 [2024-07-20 18:08:51.907244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.131 [2024-07-20 18:08:51.907706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.131 [2024-07-20 18:08:51.907747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.131 [2024-07-20 18:08:51.923453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.131 [2024-07-20 18:08:51.923904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.131 [2024-07-20 18:08:51.923933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.388 [2024-07-20 18:08:51.939416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.388 [2024-07-20 18:08:51.939850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.388 [2024-07-20 18:08:51.939878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.388 [2024-07-20 18:08:51.955692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.388 [2024-07-20 18:08:51.956140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.388 [2024-07-20 
18:08:51.956181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.388 [2024-07-20 18:08:51.971905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.389 [2024-07-20 18:08:51.972339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.389 [2024-07-20 18:08:51.972366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.389 [2024-07-20 18:08:51.988525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.389 [2024-07-20 18:08:51.989008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.389 [2024-07-20 18:08:51.989036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.389 [2024-07-20 18:08:52.004878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.389 [2024-07-20 18:08:52.005312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.389 [2024-07-20 18:08:52.005339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.389 [2024-07-20 18:08:52.021303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.389 [2024-07-20 18:08:52.021732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.389 [2024-07-20 18:08:52.021759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.389 [2024-07-20 18:08:52.037699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.389 [2024-07-20 18:08:52.038141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.389 [2024-07-20 18:08:52.038168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.389 [2024-07-20 18:08:52.054213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.389 [2024-07-20 18:08:52.054686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.389 [2024-07-20 18:08:52.054712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.389 [2024-07-20 18:08:52.070547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.389 [2024-07-20 18:08:52.071014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25038 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:17.389 [2024-07-20 18:08:52.071041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.389 [2024-07-20 18:08:52.087095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.389 [2024-07-20 18:08:52.087548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.389 [2024-07-20 18:08:52.087590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.389 [2024-07-20 18:08:52.103581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.389 [2024-07-20 18:08:52.104036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.389 [2024-07-20 18:08:52.104063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.389 [2024-07-20 18:08:52.120342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.389 [2024-07-20 18:08:52.120804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.389 [2024-07-20 18:08:52.120832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.389 [2024-07-20 18:08:52.136985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.389 [2024-07-20 18:08:52.137453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.389 [2024-07-20 18:08:52.137479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.389 [2024-07-20 18:08:52.153821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.389 [2024-07-20 18:08:52.154257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.389 [2024-07-20 18:08:52.154283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.389 [2024-07-20 18:08:52.170701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.389 [2024-07-20 18:08:52.171144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.389 [2024-07-20 18:08:52.171171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.647 [2024-07-20 18:08:52.187217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.647 [2024-07-20 18:08:52.187629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7053 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.647 [2024-07-20 18:08:52.187655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.647 [2024-07-20 18:08:52.203720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.647 [2024-07-20 18:08:52.204158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.647 [2024-07-20 18:08:52.204184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.647 [2024-07-20 18:08:52.220101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.647 [2024-07-20 18:08:52.220530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.647 [2024-07-20 18:08:52.220570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.647 [2024-07-20 18:08:52.236554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.647 [2024-07-20 18:08:52.237006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.647 [2024-07-20 18:08:52.237049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.647 [2024-07-20 18:08:52.253198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.647 [2024-07-20 18:08:52.253655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.647 [2024-07-20 18:08:52.253686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.647 [2024-07-20 18:08:52.269630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.647 [2024-07-20 18:08:52.270063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.647 [2024-07-20 18:08:52.270090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.647 [2024-07-20 18:08:52.286066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.647 [2024-07-20 18:08:52.286490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.647 [2024-07-20 18:08:52.286517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.647 [2024-07-20 18:08:52.302570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.647 [2024-07-20 18:08:52.303009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:112 nsid:1 lba:11139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.647 [2024-07-20 18:08:52.303051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.647 [2024-07-20 18:08:52.319044] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.647 [2024-07-20 18:08:52.319462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.647 [2024-07-20 18:08:52.319490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.647 [2024-07-20 18:08:52.335508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.647 [2024-07-20 18:08:52.335921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.647 [2024-07-20 18:08:52.335948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.647 [2024-07-20 18:08:52.352158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.647 [2024-07-20 18:08:52.352630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.647 [2024-07-20 18:08:52.352671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.647 [2024-07-20 18:08:52.368688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.647 [2024-07-20 18:08:52.369131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.647 [2024-07-20 18:08:52.369158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.647 [2024-07-20 18:08:52.385128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.647 [2024-07-20 18:08:52.385537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.647 [2024-07-20 18:08:52.385564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.648 [2024-07-20 18:08:52.401565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.648 [2024-07-20 18:08:52.401984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.648 [2024-07-20 18:08:52.402012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.648 [2024-07-20 18:08:52.417933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.648 [2024-07-20 18:08:52.418392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.648 [2024-07-20 18:08:52.418433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.648 [2024-07-20 18:08:52.434500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.648 [2024-07-20 18:08:52.434912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.648 [2024-07-20 18:08:52.434940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.905 [2024-07-20 18:08:52.450932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.905 [2024-07-20 18:08:52.451328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.905 [2024-07-20 18:08:52.451370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.905 [2024-07-20 18:08:52.467460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.905 [2024-07-20 18:08:52.467863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.905 [2024-07-20 18:08:52.467891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.905 [2024-07-20 18:08:52.483920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.905 [2024-07-20 18:08:52.484353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.905 [2024-07-20 18:08:52.484395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.905 [2024-07-20 18:08:52.500601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.905 [2024-07-20 18:08:52.501040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.905 [2024-07-20 18:08:52.501066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.905 [2024-07-20 18:08:52.517192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.905 [2024-07-20 18:08:52.517573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.905 [2024-07-20 18:08:52.517615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.905 [2024-07-20 18:08:52.533832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.905 [2024-07-20 
18:08:52.534269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.905 [2024-07-20 18:08:52.534296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.905 [2024-07-20 18:08:52.550253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.906 [2024-07-20 18:08:52.550683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.906 [2024-07-20 18:08:52.550715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.906 [2024-07-20 18:08:52.566696] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.906 [2024-07-20 18:08:52.567119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.906 [2024-07-20 18:08:52.567161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.906 [2024-07-20 18:08:52.583100] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.906 [2024-07-20 18:08:52.583569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.906 [2024-07-20 18:08:52.583596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.906 [2024-07-20 18:08:52.599684] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.906 [2024-07-20 18:08:52.600161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.906 [2024-07-20 18:08:52.600188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.906 [2024-07-20 18:08:52.616275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.906 [2024-07-20 18:08:52.616710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.906 [2024-07-20 18:08:52.616737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.906 [2024-07-20 18:08:52.632627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.906 [2024-07-20 18:08:52.633064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.906 [2024-07-20 18:08:52.633090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.906 [2024-07-20 18:08:52.649168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 
00:33:17.906 [2024-07-20 18:08:52.649604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.906 [2024-07-20 18:08:52.649631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.906 [2024-07-20 18:08:52.665783] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.906 [2024-07-20 18:08:52.666219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.906 [2024-07-20 18:08:52.666246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.906 [2024-07-20 18:08:52.682319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.906 [2024-07-20 18:08:52.682783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.906 [2024-07-20 18:08:52.682823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:17.906 [2024-07-20 18:08:52.698836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:17.906 [2024-07-20 18:08:52.699258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:17.906 [2024-07-20 18:08:52.699286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.163 [2024-07-20 18:08:52.715143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.163 [2024-07-20 18:08:52.715602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.163 [2024-07-20 18:08:52.715629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.163 [2024-07-20 18:08:52.731551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.163 [2024-07-20 18:08:52.731977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.163 [2024-07-20 18:08:52.732005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.163 [2024-07-20 18:08:52.748243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.163 [2024-07-20 18:08:52.748706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.163 [2024-07-20 18:08:52.748732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.163 [2024-07-20 18:08:52.764927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.163 [2024-07-20 18:08:52.765365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.163 [2024-07-20 18:08:52.765391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.163 [2024-07-20 18:08:52.781461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.163 [2024-07-20 18:08:52.781924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.163 [2024-07-20 18:08:52.781953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.163 [2024-07-20 18:08:52.798030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.163 [2024-07-20 18:08:52.798460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.163 [2024-07-20 18:08:52.798490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.163 [2024-07-20 18:08:52.814493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.163 [2024-07-20 18:08:52.814919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.163 [2024-07-20 18:08:52.814948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.163 [2024-07-20 18:08:52.831163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.163 [2024-07-20 18:08:52.831596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.163 [2024-07-20 18:08:52.831624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.163 [2024-07-20 18:08:52.847907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.163 [2024-07-20 18:08:52.848319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.163 [2024-07-20 18:08:52.848346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.163 [2024-07-20 18:08:52.864536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.163 [2024-07-20 18:08:52.864960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.163 [2024-07-20 18:08:52.864988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.163 [2024-07-20 18:08:52.881392] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.163 [2024-07-20 18:08:52.881821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.163 [2024-07-20 18:08:52.881849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.163 [2024-07-20 18:08:52.898034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.163 [2024-07-20 18:08:52.898461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.163 [2024-07-20 18:08:52.898488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.163 [2024-07-20 18:08:52.914580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.163 [2024-07-20 18:08:52.915010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.163 [2024-07-20 18:08:52.915037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.163 [2024-07-20 18:08:52.931011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.163 [2024-07-20 18:08:52.931421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.163 [2024-07-20 18:08:52.931467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.163 [2024-07-20 18:08:52.947755] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.163 [2024-07-20 18:08:52.948186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.163 [2024-07-20 18:08:52.948228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.420 [2024-07-20 18:08:52.964161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.420 [2024-07-20 18:08:52.964604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.420 [2024-07-20 18:08:52.964632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.420 [2024-07-20 18:08:52.980537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.420 [2024-07-20 18:08:52.980944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.420 [2024-07-20 18:08:52.980990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.420 
[2024-07-20 18:08:52.996920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.420 [2024-07-20 18:08:52.997312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.420 [2024-07-20 18:08:52.997354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.420 [2024-07-20 18:08:53.012943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.420 [2024-07-20 18:08:53.013336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.420 [2024-07-20 18:08:53.013378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.420 [2024-07-20 18:08:53.029374] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.420 [2024-07-20 18:08:53.029806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.420 [2024-07-20 18:08:53.029834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.420 [2024-07-20 18:08:53.045829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.420 [2024-07-20 18:08:53.046252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.420 [2024-07-20 18:08:53.046278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.420 [2024-07-20 18:08:53.062208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.420 [2024-07-20 18:08:53.062618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.421 [2024-07-20 18:08:53.062659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.421 [2024-07-20 18:08:53.078575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.421 [2024-07-20 18:08:53.079028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.421 [2024-07-20 18:08:53.079057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.421 [2024-07-20 18:08:53.094978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.421 [2024-07-20 18:08:53.095386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.421 [2024-07-20 18:08:53.095413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:33:18.421 [2024-07-20 18:08:53.111406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.421 [2024-07-20 18:08:53.111830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.421 [2024-07-20 18:08:53.111866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.421 [2024-07-20 18:08:53.127932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.421 [2024-07-20 18:08:53.128342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.421 [2024-07-20 18:08:53.128370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.421 [2024-07-20 18:08:53.144534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.421 [2024-07-20 18:08:53.144986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.421 [2024-07-20 18:08:53.145015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.421 [2024-07-20 18:08:53.161300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.421 [2024-07-20 18:08:53.161755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.421 [2024-07-20 18:08:53.161784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.421 [2024-07-20 18:08:53.177961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.421 [2024-07-20 18:08:53.178393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.421 [2024-07-20 18:08:53.178422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.421 [2024-07-20 18:08:53.194604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.421 [2024-07-20 18:08:53.195036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.421 [2024-07-20 18:08:53.195064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.421 [2024-07-20 18:08:53.211064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.421 [2024-07-20 18:08:53.211507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.421 [2024-07-20 18:08:53.211537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.678 [2024-07-20 18:08:53.227356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.678 [2024-07-20 18:08:53.227782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.678 [2024-07-20 18:08:53.227818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.678 [2024-07-20 18:08:53.243486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.678 [2024-07-20 18:08:53.243934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.678 [2024-07-20 18:08:53.243962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.678 [2024-07-20 18:08:53.260247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.678 [2024-07-20 18:08:53.260675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.678 [2024-07-20 18:08:53.260717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.678 [2024-07-20 18:08:53.276786] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.678 [2024-07-20 18:08:53.277219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.678 [2024-07-20 18:08:53.277247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.678 [2024-07-20 18:08:53.293494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.678 [2024-07-20 18:08:53.293926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.678 [2024-07-20 18:08:53.293954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.678 [2024-07-20 18:08:53.309890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.678 [2024-07-20 18:08:53.310276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.678 [2024-07-20 18:08:53.310318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:18.678 [2024-07-20 18:08:53.325950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08 00:33:18.678 [2024-07-20 18:08:53.326400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:18.678 [2024-07-20 18:08:53.326427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:33:18.678 [2024-07-20 18:08:53.342339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08
00:33:18.678 [2024-07-20 18:08:53.342807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.678 [2024-07-20 18:08:53.342852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:33:18.678 [2024-07-20 18:08:53.358828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08
00:33:18.678 [2024-07-20 18:08:53.359239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.678 [2024-07-20 18:08:53.359266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:33:18.678 [2024-07-20 18:08:53.374895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4d910) with pdu=0x2000190f4b08
00:33:18.678 [2024-07-20 18:08:53.375272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:18.678 [2024-07-20 18:08:53.375298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:33:18.678
00:33:18.678 Latency(us)
00:33:18.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:18.678 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:18.678 nvme0n1 : 2.01 15376.86 60.07 0.00 0.00 8302.63 5655.51 24855.13
00:33:18.678 ===================================================================================================================
00:33:18.678 Total : 15376.86 60.07 0.00 0.00 8302.63 5655.51 24855.13
00:33:18.678 0
00:33:18.678 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:18.678 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:18.678 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:18.678 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:18.678 | .driver_specific
00:33:18.678 | .nvme_error
00:33:18.678 | .status_code
00:33:18.678 | .command_transient_transport_error'
00:33:18.934 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 121 > 0 ))
00:33:18.934 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1096090
00:33:18.934 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1096090 ']'
00:33:18.934 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1096090
00:33:18.934 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:18.934 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:18.934 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1096090 00:33:18.934 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:18.934 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:18.934 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1096090' 00:33:18.934 killing process with pid 1096090 00:33:18.934 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1096090 00:33:18.934 Received shutdown signal, test time was about 2.000000 seconds 00:33:18.934 00:33:18.934 Latency(us) 00:33:18.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.934 =================================================================================================================== 00:33:18.934 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:18.934 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1096090 00:33:19.220 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:19.220 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:19.220 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:19.220 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:19.221 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:19.221 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1096575 00:33:19.221 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:19.221 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1096575 /var/tmp/bperf.sock 00:33:19.221 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1096575 ']' 00:33:19.221 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:19.221 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:19.221 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:19.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:19.221 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:19.221 18:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:19.221 [2024-07-20 18:08:53.935283] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:19.221 [2024-07-20 18:08:53.935373] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1096575 ] 00:33:19.221 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:19.221 Zero copy mechanism will not be used. 
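For reference, the bdevperf launch pattern traced above can be condensed into a short sketch. It only recombines the command line visible in the trace; the relative binary path and the socket-polling loop are assumptions standing in for the test's own waitforlisten helper, not its exact implementation:

  # start a standalone bdevperf app with a private RPC socket (-z: idle until an RPC starts the I/O)
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # assumed stand-in for waitforlisten: block until the UNIX-domain RPC socket exists before issuing RPCs
  while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done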
00:33:19.221 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.221 [2024-07-20 18:08:53.995444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.478 [2024-07-20 18:08:54.080657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.478 18:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:19.478 18:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:19.478 18:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:19.478 18:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:19.736 18:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:19.737 18:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.737 18:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:19.737 18:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.737 18:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:19.737 18:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:20.301 nvme0n1 00:33:20.301 18:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:20.301 18:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.301 18:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:20.301 18:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.301 18:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:20.301 18:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:20.301 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:20.301 Zero copy mechanism will not be used. 00:33:20.301 Running I/O for 2 seconds... 
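The RPC sequence that produces the digest errors counted below is spread through the trace above; condensed, with the socket and script paths taken from the trace (the inline comments are interpretations of each step, not authoritative flag descriptions), it looks roughly like:

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # keep NVMe error counters, retry I/O indefinitely
  $rpc accel_error_inject_error -o crc32c -t disable                    # injection off while the controller attaches
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # enable TCP data digest
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32              # corrupt crc32c results so data digests go out wrong
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each WRITE that fails its digest check completes as COMMAND TRANSIENT TRANSPORT ERROR, which is what the jq filter over bdev_get_iostat (.driver_specific.nvme_error.status_code.command_transient_transport_error) counts once the 2-second run ends.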
00:33:20.301 [2024-07-20 18:08:54.984583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.301 [2024-07-20 18:08:54.985522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.301 [2024-07-20 18:08:54.985578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.301 [2024-07-20 18:08:55.015177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.301 [2024-07-20 18:08:55.015807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.302 [2024-07-20 18:08:55.015862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.302 [2024-07-20 18:08:55.048315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.302 [2024-07-20 18:08:55.049032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.302 [2024-07-20 18:08:55.049061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.302 [2024-07-20 18:08:55.082557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.302 [2024-07-20 18:08:55.083633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.302 [2024-07-20 18:08:55.083662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.560 [2024-07-20 18:08:55.115863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.560 [2024-07-20 18:08:55.116930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.560 [2024-07-20 18:08:55.116974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.560 [2024-07-20 18:08:55.151058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.560 [2024-07-20 18:08:55.152024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.560 [2024-07-20 18:08:55.152078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.560 [2024-07-20 18:08:55.184161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.560 [2024-07-20 18:08:55.184631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.560 [2024-07-20 18:08:55.184677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.560 [2024-07-20 18:08:55.216719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.560 [2024-07-20 18:08:55.217667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.560 [2024-07-20 18:08:55.217697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.560 [2024-07-20 18:08:55.247092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.560 [2024-07-20 18:08:55.247595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.560 [2024-07-20 18:08:55.247642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.560 [2024-07-20 18:08:55.280022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.560 [2024-07-20 18:08:55.281258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.560 [2024-07-20 18:08:55.281303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.560 [2024-07-20 18:08:55.316186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.560 [2024-07-20 18:08:55.316944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.560 [2024-07-20 18:08:55.317002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.560 [2024-07-20 18:08:55.353171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.560 [2024-07-20 18:08:55.354073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.560 [2024-07-20 18:08:55.354106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.819 [2024-07-20 18:08:55.385565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.819 [2024-07-20 18:08:55.386458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.819 [2024-07-20 18:08:55.386505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.819 [2024-07-20 18:08:55.420115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.819 [2024-07-20 18:08:55.421008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.819 [2024-07-20 18:08:55.421053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.819 [2024-07-20 18:08:55.457182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.819 [2024-07-20 18:08:55.458208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.819 [2024-07-20 18:08:55.458254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.819 [2024-07-20 18:08:55.492169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.819 [2024-07-20 18:08:55.492636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.819 [2024-07-20 18:08:55.492681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.819 [2024-07-20 18:08:55.526120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.819 [2024-07-20 18:08:55.526990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.819 [2024-07-20 18:08:55.527036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.819 [2024-07-20 18:08:55.555550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.819 [2024-07-20 18:08:55.556290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.819 [2024-07-20 18:08:55.556334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.819 [2024-07-20 18:08:55.588728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:20.819 [2024-07-20 18:08:55.589603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.819 [2024-07-20 18:08:55.589633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.077 [2024-07-20 18:08:55.622670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.077 [2024-07-20 18:08:55.623426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.077 [2024-07-20 18:08:55.623456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.077 [2024-07-20 18:08:55.655754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.077 [2024-07-20 18:08:55.656349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.077 [2024-07-20 18:08:55.656394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.077 [2024-07-20 18:08:55.689087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.077 [2024-07-20 18:08:55.689993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.077 [2024-07-20 18:08:55.690039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.077 [2024-07-20 18:08:55.724260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.077 [2024-07-20 18:08:55.724993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.077 [2024-07-20 18:08:55.725038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.077 [2024-07-20 18:08:55.755519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.077 [2024-07-20 18:08:55.756374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.077 [2024-07-20 18:08:55.756405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.077 [2024-07-20 18:08:55.791171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.077 [2024-07-20 18:08:55.792185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.077 [2024-07-20 18:08:55.792229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.077 [2024-07-20 18:08:55.823507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.077 [2024-07-20 18:08:55.824160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.077 [2024-07-20 18:08:55.824190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.077 [2024-07-20 18:08:55.854817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.077 [2024-07-20 18:08:55.855307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.077 [2024-07-20 18:08:55.855351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.335 [2024-07-20 18:08:55.889277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.335 [2024-07-20 18:08:55.890039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.335 
[2024-07-20 18:08:55.890070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.335 [2024-07-20 18:08:55.920231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.335 [2024-07-20 18:08:55.920971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.335 [2024-07-20 18:08:55.921019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.335 [2024-07-20 18:08:55.951416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.335 [2024-07-20 18:08:55.952043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.335 [2024-07-20 18:08:55.952074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.335 [2024-07-20 18:08:55.986308] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.335 [2024-07-20 18:08:55.987374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.335 [2024-07-20 18:08:55.987406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.335 [2024-07-20 18:08:56.020320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.335 [2024-07-20 18:08:56.021051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.335 [2024-07-20 18:08:56.021082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.335 [2024-07-20 18:08:56.054698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.335 [2024-07-20 18:08:56.055303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.335 [2024-07-20 18:08:56.055333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.335 [2024-07-20 18:08:56.089785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.335 [2024-07-20 18:08:56.090383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.335 [2024-07-20 18:08:56.090427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.335 [2024-07-20 18:08:56.123910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.335 [2024-07-20 18:08:56.124513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.335 [2024-07-20 18:08:56.124543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.600 [2024-07-20 18:08:56.158423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.600 [2024-07-20 18:08:56.159057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.600 [2024-07-20 18:08:56.159089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.600 [2024-07-20 18:08:56.189714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.600 [2024-07-20 18:08:56.190593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.600 [2024-07-20 18:08:56.190639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.600 [2024-07-20 18:08:56.220827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.600 [2024-07-20 18:08:56.221685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.600 [2024-07-20 18:08:56.221715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.600 [2024-07-20 18:08:56.256565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.600 [2024-07-20 18:08:56.257562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.600 [2024-07-20 18:08:56.257593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.600 [2024-07-20 18:08:56.291716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.600 [2024-07-20 18:08:56.292471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.600 [2024-07-20 18:08:56.292501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.600 [2024-07-20 18:08:56.326426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.600 [2024-07-20 18:08:56.327221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.600 [2024-07-20 18:08:56.327252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.600 [2024-07-20 18:08:56.361183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.600 [2024-07-20 18:08:56.361943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.600 [2024-07-20 18:08:56.361974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.600 [2024-07-20 18:08:56.392929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.600 [2024-07-20 18:08:56.393692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.600 [2024-07-20 18:08:56.393723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.865 [2024-07-20 18:08:56.426519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.865 [2024-07-20 18:08:56.427279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.865 [2024-07-20 18:08:56.427311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.865 [2024-07-20 18:08:56.461253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.865 [2024-07-20 18:08:56.462211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.865 [2024-07-20 18:08:56.462242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.865 [2024-07-20 18:08:56.495089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.865 [2024-07-20 18:08:56.495974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.865 [2024-07-20 18:08:56.496003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.865 [2024-07-20 18:08:56.531688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.865 [2024-07-20 18:08:56.532487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.865 [2024-07-20 18:08:56.532515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.865 [2024-07-20 18:08:56.567515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.865 [2024-07-20 18:08:56.568602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.865 [2024-07-20 18:08:56.568631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.865 [2024-07-20 18:08:56.600679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.865 [2024-07-20 18:08:56.601543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.865 [2024-07-20 18:08:56.601572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.865 [2024-07-20 18:08:56.634815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:21.865 [2024-07-20 18:08:56.635631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.865 [2024-07-20 18:08:56.635658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:22.123 [2024-07-20 18:08:56.667546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:22.123 [2024-07-20 18:08:56.668289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.123 [2024-07-20 18:08:56.668317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:22.123 [2024-07-20 18:08:56.702661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:22.123 [2024-07-20 18:08:56.703399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.123 [2024-07-20 18:08:56.703427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.123 [2024-07-20 18:08:56.735933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:22.123 [2024-07-20 18:08:56.736677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.123 [2024-07-20 18:08:56.736704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:22.123 [2024-07-20 18:08:56.767300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:22.123 [2024-07-20 18:08:56.767950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.123 [2024-07-20 18:08:56.768001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:22.123 [2024-07-20 18:08:56.801665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:22.123 [2024-07-20 18:08:56.802425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.123 [2024-07-20 18:08:56.802453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:22.123 [2024-07-20 18:08:56.837232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:22.123 
[2024-07-20 18:08:56.838212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.123 [2024-07-20 18:08:56.838239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.123 [2024-07-20 18:08:56.871534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:22.123 [2024-07-20 18:08:56.872311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.123 [2024-07-20 18:08:56.872354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:22.123 [2024-07-20 18:08:56.905975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:22.123 [2024-07-20 18:08:56.906935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.123 [2024-07-20 18:08:56.906963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:22.381 [2024-07-20 18:08:56.935926] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4dc50) with pdu=0x2000190fef90 00:33:22.381 [2024-07-20 18:08:56.936628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:22.381 [2024-07-20 18:08:56.936655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:22.381 00:33:22.381 Latency(us) 00:33:22.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.381 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:22.381 nvme0n1 : 2.02 920.18 115.02 0.00 0.00 17303.50 11747.93 37865.24 00:33:22.381 =================================================================================================================== 00:33:22.381 Total : 920.18 115.02 0.00 0.00 17303.50 11747.93 37865.24 00:33:22.381 0 00:33:22.381 18:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:22.381 18:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:22.381 18:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:22.381 | .driver_specific 00:33:22.381 | .nvme_error 00:33:22.381 | .status_code 00:33:22.381 | .command_transient_transport_error' 00:33:22.381 18:08:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:22.639 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 59 > 0 )) 00:33:22.639 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1096575 00:33:22.639 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1096575 ']' 00:33:22.639 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1096575 00:33:22.639 18:08:57 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:22.639 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:22.639 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1096575 00:33:22.639 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:22.639 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:22.639 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1096575' 00:33:22.639 killing process with pid 1096575 00:33:22.639 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1096575 00:33:22.639 Received shutdown signal, test time was about 2.000000 seconds 00:33:22.639 00:33:22.639 Latency(us) 00:33:22.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.639 =================================================================================================================== 00:33:22.639 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:22.639 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1096575 00:33:22.897 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1095210 00:33:22.897 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1095210 ']' 00:33:22.897 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1095210 00:33:22.897 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:22.897 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:22.897 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1095210 00:33:22.897 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:22.897 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:22.897 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1095210' 00:33:22.897 killing process with pid 1095210 00:33:22.898 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1095210 00:33:22.898 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1095210 00:33:23.156 00:33:23.156 real 0m15.084s 00:33:23.156 user 0m30.471s 00:33:23.156 sys 0m3.835s 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:23.156 ************************************ 00:33:23.156 END TEST nvmf_digest_error 00:33:23.156 ************************************ 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:23.156 18:08:57 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:23.156 rmmod nvme_tcp 00:33:23.156 rmmod nvme_fabrics 00:33:23.156 rmmod nvme_keyring 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1095210 ']' 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1095210 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 1095210 ']' 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 1095210 00:33:23.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1095210) - No such process 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 1095210 is not found' 00:33:23.156 Process with pid 1095210 is not found 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:23.156 18:08:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.683 18:08:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:25.683 00:33:25.683 real 0m34.525s 00:33:25.683 user 1m1.780s 00:33:25.683 sys 0m9.139s 00:33:25.683 18:08:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:25.683 18:08:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:25.683 ************************************ 00:33:25.683 END TEST nvmf_digest 00:33:25.683 ************************************ 00:33:25.683 18:08:59 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:33:25.683 18:08:59 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:33:25.683 18:08:59 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:33:25.683 18:08:59 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:25.683 18:08:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:25.683 18:08:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:25.683 18:08:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:25.683 ************************************ 00:33:25.683 START TEST nvmf_bdevperf 00:33:25.683 ************************************ 00:33:25.683 18:08:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:25.683 * Looking for test storage... 00:33:25.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:25.683 18:09:00 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:25.684 18:09:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:27.058 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:27.058 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:27.058 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:27.058 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:27.058 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:27.059 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:27.059 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:27.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:27.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:33:27.317 00:33:27.317 --- 10.0.0.2 ping statistics --- 00:33:27.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.317 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:27.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:27.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:33:27.317 00:33:27.317 --- 10.0.0.1 ping statistics --- 00:33:27.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.317 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:27.317 18:09:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:27.317 18:09:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:27.317 18:09:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:27.317 18:09:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:27.317 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:27.317 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:27.317 18:09:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1099032 00:33:27.317 18:09:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:27.317 18:09:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1099032 00:33:27.317 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1099032 ']' 00:33:27.317 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.317 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:27.317 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:27.317 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:27.317 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:27.317 [2024-07-20 18:09:02.059220] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:27.317 [2024-07-20 18:09:02.059300] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:27.317 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.574 [2024-07-20 18:09:02.134045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:27.574 [2024-07-20 18:09:02.226400] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:27.574 [2024-07-20 18:09:02.226461] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:27.574 [2024-07-20 18:09:02.226478] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:27.574 [2024-07-20 18:09:02.226491] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:27.574 [2024-07-20 18:09:02.226503] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:27.574 [2024-07-20 18:09:02.226586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:27.574 [2024-07-20 18:09:02.226627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:27.574 [2024-07-20 18:09:02.226631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.574 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:27.574 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:27.574 18:09:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:27.574 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:27.574 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:27.574 18:09:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:27.574 18:09:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:27.574 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.575 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:27.575 [2024-07-20 18:09:02.352532] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:27.575 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.575 18:09:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:27.575 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.575 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:27.833 Malloc0 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:27.833 [2024-07-20 18:09:02.416272] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:27.833 { 00:33:27.833 "params": { 00:33:27.833 "name": "Nvme$subsystem", 00:33:27.833 "trtype": "$TEST_TRANSPORT", 00:33:27.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:27.833 "adrfam": "ipv4", 00:33:27.833 "trsvcid": "$NVMF_PORT", 00:33:27.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:27.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:27.833 "hdgst": ${hdgst:-false}, 00:33:27.833 "ddgst": ${ddgst:-false} 00:33:27.833 }, 00:33:27.833 "method": "bdev_nvme_attach_controller" 00:33:27.833 } 00:33:27.833 EOF 00:33:27.833 )") 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:27.833 18:09:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:27.833 "params": { 00:33:27.833 "name": "Nvme1", 00:33:27.833 "trtype": "tcp", 00:33:27.833 "traddr": "10.0.0.2", 00:33:27.833 "adrfam": "ipv4", 00:33:27.833 "trsvcid": "4420", 00:33:27.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:27.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:27.833 "hdgst": false, 00:33:27.833 "ddgst": false 00:33:27.833 }, 00:33:27.833 "method": "bdev_nvme_attach_controller" 00:33:27.833 }' 00:33:27.833 [2024-07-20 18:09:02.461702] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:27.833 [2024-07-20 18:09:02.461809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1099064 ] 00:33:27.833 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.833 [2024-07-20 18:09:02.522454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.833 [2024-07-20 18:09:02.607818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.397 Running I/O for 1 seconds... 
00:33:29.326 00:33:29.326 Latency(us) 00:33:29.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.326 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:29.326 Verification LBA range: start 0x0 length 0x4000 00:33:29.326 Nvme1n1 : 1.01 8908.83 34.80 0.00 0.00 14295.66 2949.12 21262.79 00:33:29.326 =================================================================================================================== 00:33:29.326 Total : 8908.83 34.80 0.00 0.00 14295.66 2949.12 21262.79 00:33:29.584 18:09:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1099318 00:33:29.584 18:09:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:29.584 18:09:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:29.584 18:09:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:29.584 18:09:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:29.584 18:09:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:29.584 18:09:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:29.584 18:09:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:29.584 { 00:33:29.584 "params": { 00:33:29.584 "name": "Nvme$subsystem", 00:33:29.584 "trtype": "$TEST_TRANSPORT", 00:33:29.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:29.584 "adrfam": "ipv4", 00:33:29.584 "trsvcid": "$NVMF_PORT", 00:33:29.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:29.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:29.584 "hdgst": ${hdgst:-false}, 00:33:29.584 "ddgst": ${ddgst:-false} 00:33:29.584 }, 00:33:29.584 "method": "bdev_nvme_attach_controller" 00:33:29.584 } 00:33:29.584 EOF 00:33:29.584 )") 00:33:29.584 18:09:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:29.584 18:09:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:29.584 18:09:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:29.584 18:09:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:29.584 "params": { 00:33:29.584 "name": "Nvme1", 00:33:29.584 "trtype": "tcp", 00:33:29.584 "traddr": "10.0.0.2", 00:33:29.584 "adrfam": "ipv4", 00:33:29.584 "trsvcid": "4420", 00:33:29.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:29.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:29.584 "hdgst": false, 00:33:29.584 "ddgst": false 00:33:29.584 }, 00:33:29.584 "method": "bdev_nvme_attach_controller" 00:33:29.584 }' 00:33:29.584 [2024-07-20 18:09:04.183033] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:29.584 [2024-07-20 18:09:04.183147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1099318 ] 00:33:29.584 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.584 [2024-07-20 18:09:04.245141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.584 [2024-07-20 18:09:04.330153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:29.848 Running I/O for 15 seconds... 
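The bdevperf runs traced above take their NVMe-oF configuration as JSON on a substituted file descriptor (--json /dev/fd/62 and /dev/fd/63): the fragment printed by gen_nvmf_target_json is a single bdev_nvme_attach_controller call that creates controller Nvme1 against the listener at 10.0.0.2:4420. The second invocation adds -t 15 and -f, which as used here appears to let the job keep running across I/O failures so the target-kill path below can be exercised. A hand-written standalone equivalent is sketched here for illustration only; the outer "subsystems"/"bdev" wrapper is the standard SPDK JSON-config shape and is an assumption, since the harness prints only the inner fragment.

# illustrative sketch, not the harness's literal output
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false }
    } ]
  } ]
}
EOF
# same queue depth, IO size and workload as the trace; -t 15 -f for the failover run
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 15 -f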
00:33:32.429 18:09:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1099032 00:33:32.429 18:09:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:32.429 [2024-07-20 18:09:07.152045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:54048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:54056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:54064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:54080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:54088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:54112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:54120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 
18:09:07.152467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:54128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:54136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:54144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:54152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:54160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:54168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:54184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:54192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:54200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:54208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:54224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:54232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.152973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.152989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.430 [2024-07-20 18:09:07.153003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.153019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.430 [2024-07-20 18:09:07.153034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.153050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.430 [2024-07-20 18:09:07.153064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.153098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.430 [2024-07-20 18:09:07.153114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.153131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.430 [2024-07-20 18:09:07.153147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.153164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.430 [2024-07-20 18:09:07.153180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.153197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.430 [2024-07-20 18:09:07.153213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.153231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.430 [2024-07-20 18:09:07.153246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.153264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.153280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.153297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:54248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.153313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.153330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.153346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.153363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:54264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.153382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.153400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:54272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.153417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.153434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:54280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.153450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.153467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:54288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.153482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.153500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.153516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.153533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:54304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.430 [2024-07-20 18:09:07.153549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.430 [2024-07-20 18:09:07.153566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.153582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.153599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:54320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.153615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.153632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:54328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.153648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.153665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.153681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.153698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.153714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.153731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.153747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.153764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:54360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.153789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.153819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.153851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.153868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.153882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 
[2024-07-20 18:09:07.153898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:54384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.153912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.153927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:54392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.153941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.153957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:54400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.153971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.153987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:54408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:54416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:54440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:54448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:54456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154246] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:54464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:54472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:54480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:54488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:54496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:54504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:54512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:54520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:54536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:54552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:54568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:54584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:54592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:54600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.431 [2024-07-20 18:09:07.154930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:30 nsid:1 lba:54816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.431 [2024-07-20 18:09:07.154960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.154975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.431 [2024-07-20 18:09:07.154989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.431 [2024-07-20 18:09:07.155005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54896 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 
[2024-07-20 18:09:07.155659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.155763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:54624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.432 [2024-07-20 18:09:07.155815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.432 [2024-07-20 18:09:07.155873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:54640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.432 [2024-07-20 18:09:07.155903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:54648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.432 [2024-07-20 18:09:07.155933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.432 [2024-07-20 18:09:07.155963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.155979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:54664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.432 [2024-07-20 18:09:07.155993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.156009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.432 [2024-07-20 18:09:07.156023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.156043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:54680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.432 [2024-07-20 18:09:07.156058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.156074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.156108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.156122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.156135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.156164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.156181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.156198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.156213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.156231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.156246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.156263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.156279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.156296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.156313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.156330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:32.432 [2024-07-20 18:09:07.156347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.156364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.432 [2024-07-20 18:09:07.156380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.156397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.432 [2024-07-20 18:09:07.156412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.156430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.432 [2024-07-20 18:09:07.156445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.156462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.432 [2024-07-20 18:09:07.156481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.432 [2024-07-20 18:09:07.156499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.433 [2024-07-20 18:09:07.156515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.433 [2024-07-20 18:09:07.156532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.433 [2024-07-20 18:09:07.156548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.433 [2024-07-20 18:09:07.156577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.433 [2024-07-20 18:09:07.156592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.433 [2024-07-20 18:09:07.156609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf43150 is same with the state(5) to be set 00:33:32.433 [2024-07-20 18:09:07.156627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:32.433 [2024-07-20 18:09:07.156640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:32.433 [2024-07-20 18:09:07.156653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54744 len:8 PRP1 0x0 PRP2 0x0 00:33:32.433 [2024-07-20 18:09:07.156668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.433 [2024-07-20 18:09:07.156729] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf43150 was disconnected and freed. reset controller. 
00:33:32.433 [2024-07-20 18:09:07.156818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.433 [2024-07-20 18:09:07.156855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.433 [2024-07-20 18:09:07.156871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.433 [2024-07-20 18:09:07.156884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.433 [2024-07-20 18:09:07.156898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.433 [2024-07-20 18:09:07.156912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.433 [2024-07-20 18:09:07.156926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.433 [2024-07-20 18:09:07.156939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.433 [2024-07-20 18:09:07.156952] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.433 [2024-07-20 18:09:07.160746] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.433 [2024-07-20 18:09:07.160806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.433 [2024-07-20 18:09:07.161824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.433 [2024-07-20 18:09:07.161875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.433 [2024-07-20 18:09:07.161893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.433 [2024-07-20 18:09:07.162128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.433 [2024-07-20 18:09:07.162398] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.433 [2024-07-20 18:09:07.162423] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.433 [2024-07-20 18:09:07.162442] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.433 [2024-07-20 18:09:07.166037] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.433 [2024-07-20 18:09:07.174856] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.433 [2024-07-20 18:09:07.175368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.433 [2024-07-20 18:09:07.175400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.433 [2024-07-20 18:09:07.175418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.433 [2024-07-20 18:09:07.175657] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.433 [2024-07-20 18:09:07.175938] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.433 [2024-07-20 18:09:07.175964] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.433 [2024-07-20 18:09:07.175980] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.433 [2024-07-20 18:09:07.179545] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.433 [2024-07-20 18:09:07.188812] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.433 [2024-07-20 18:09:07.189348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.433 [2024-07-20 18:09:07.189382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.433 [2024-07-20 18:09:07.189401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.433 [2024-07-20 18:09:07.189641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.433 [2024-07-20 18:09:07.189914] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.433 [2024-07-20 18:09:07.189941] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.433 [2024-07-20 18:09:07.189957] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.433 [2024-07-20 18:09:07.193526] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.433 [2024-07-20 18:09:07.202787] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.433 [2024-07-20 18:09:07.203340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.433 [2024-07-20 18:09:07.203372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.433 [2024-07-20 18:09:07.203390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.433 [2024-07-20 18:09:07.203629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.433 [2024-07-20 18:09:07.203886] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.433 [2024-07-20 18:09:07.203911] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.433 [2024-07-20 18:09:07.203933] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.433 [2024-07-20 18:09:07.207496] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.433 [2024-07-20 18:09:07.216763] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.433 [2024-07-20 18:09:07.217317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.433 [2024-07-20 18:09:07.217345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.433 [2024-07-20 18:09:07.217360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.433 [2024-07-20 18:09:07.217602] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.433 [2024-07-20 18:09:07.217859] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.433 [2024-07-20 18:09:07.217885] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.433 [2024-07-20 18:09:07.217902] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.433 [2024-07-20 18:09:07.221464] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.693 [2024-07-20 18:09:07.230731] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.693 [2024-07-20 18:09:07.231267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.693 [2024-07-20 18:09:07.231304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.693 [2024-07-20 18:09:07.231322] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.693 [2024-07-20 18:09:07.231566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.693 [2024-07-20 18:09:07.231820] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.693 [2024-07-20 18:09:07.231845] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.693 [2024-07-20 18:09:07.231862] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.693 [2024-07-20 18:09:07.235424] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.693 [2024-07-20 18:09:07.244582] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.693 [2024-07-20 18:09:07.245091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.693 [2024-07-20 18:09:07.245119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.693 [2024-07-20 18:09:07.245134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.693 [2024-07-20 18:09:07.245396] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.693 [2024-07-20 18:09:07.245651] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.693 [2024-07-20 18:09:07.245675] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.693 [2024-07-20 18:09:07.245692] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.693 [2024-07-20 18:09:07.249295] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.693 [2024-07-20 18:09:07.258556] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.693 [2024-07-20 18:09:07.259042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.693 [2024-07-20 18:09:07.259093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.693 [2024-07-20 18:09:07.259113] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.693 [2024-07-20 18:09:07.259352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.693 [2024-07-20 18:09:07.259595] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.693 [2024-07-20 18:09:07.259620] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.693 [2024-07-20 18:09:07.259637] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.693 [2024-07-20 18:09:07.263244] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.693 [2024-07-20 18:09:07.272415] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.693 [2024-07-20 18:09:07.272938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.693 [2024-07-20 18:09:07.272968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.693 [2024-07-20 18:09:07.272985] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.693 [2024-07-20 18:09:07.273209] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.693 [2024-07-20 18:09:07.273468] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.693 [2024-07-20 18:09:07.273493] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.693 [2024-07-20 18:09:07.273510] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.693 [2024-07-20 18:09:07.277041] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.693 [2024-07-20 18:09:07.286356] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.693 [2024-07-20 18:09:07.286954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.693 [2024-07-20 18:09:07.286984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.693 [2024-07-20 18:09:07.287001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.693 [2024-07-20 18:09:07.287253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.693 [2024-07-20 18:09:07.287517] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.693 [2024-07-20 18:09:07.287555] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.693 [2024-07-20 18:09:07.287570] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.693 [2024-07-20 18:09:07.291116] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.693 [2024-07-20 18:09:07.300213] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.693 [2024-07-20 18:09:07.300746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.693 [2024-07-20 18:09:07.300779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.693 [2024-07-20 18:09:07.300804] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.693 [2024-07-20 18:09:07.301021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.693 [2024-07-20 18:09:07.301294] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.693 [2024-07-20 18:09:07.301320] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.693 [2024-07-20 18:09:07.301337] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.693 [2024-07-20 18:09:07.304672] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.693 [2024-07-20 18:09:07.314125] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.693 [2024-07-20 18:09:07.314704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.693 [2024-07-20 18:09:07.314734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.693 [2024-07-20 18:09:07.314773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.693 [2024-07-20 18:09:07.315022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.693 [2024-07-20 18:09:07.315265] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.693 [2024-07-20 18:09:07.315300] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.693 [2024-07-20 18:09:07.315318] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.693 [2024-07-20 18:09:07.318926] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.693 [2024-07-20 18:09:07.327922] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.693 [2024-07-20 18:09:07.328474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.693 [2024-07-20 18:09:07.328522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.693 [2024-07-20 18:09:07.328539] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.693 [2024-07-20 18:09:07.328808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.693 [2024-07-20 18:09:07.329046] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.693 [2024-07-20 18:09:07.329069] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.693 [2024-07-20 18:09:07.329108] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.693 [2024-07-20 18:09:07.332619] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.693 [2024-07-20 18:09:07.341970] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.693 [2024-07-20 18:09:07.342487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.693 [2024-07-20 18:09:07.342519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.693 [2024-07-20 18:09:07.342538] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.693 [2024-07-20 18:09:07.342776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.693 [2024-07-20 18:09:07.343041] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.693 [2024-07-20 18:09:07.343064] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.693 [2024-07-20 18:09:07.343079] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.693 [2024-07-20 18:09:07.346662] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.693 [2024-07-20 18:09:07.355989] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.693 [2024-07-20 18:09:07.356586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.693 [2024-07-20 18:09:07.356637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.693 [2024-07-20 18:09:07.356655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.693 [2024-07-20 18:09:07.356915] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.693 [2024-07-20 18:09:07.357165] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.693 [2024-07-20 18:09:07.357190] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.693 [2024-07-20 18:09:07.357207] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.693 [2024-07-20 18:09:07.360788] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.693 [2024-07-20 18:09:07.369958] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.693 [2024-07-20 18:09:07.370486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.693 [2024-07-20 18:09:07.370522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.694 [2024-07-20 18:09:07.370540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.694 [2024-07-20 18:09:07.370789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.694 [2024-07-20 18:09:07.371040] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.694 [2024-07-20 18:09:07.371063] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.694 [2024-07-20 18:09:07.371079] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.694 [2024-07-20 18:09:07.374589] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.694 [2024-07-20 18:09:07.383776] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.694 [2024-07-20 18:09:07.384324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.694 [2024-07-20 18:09:07.384356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.694 [2024-07-20 18:09:07.384374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.694 [2024-07-20 18:09:07.384613] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.694 [2024-07-20 18:09:07.384881] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.694 [2024-07-20 18:09:07.384904] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.694 [2024-07-20 18:09:07.384919] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.694 [2024-07-20 18:09:07.388454] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.694 [2024-07-20 18:09:07.397745] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.694 [2024-07-20 18:09:07.398258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.694 [2024-07-20 18:09:07.398286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.694 [2024-07-20 18:09:07.398310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.694 [2024-07-20 18:09:07.398553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.694 [2024-07-20 18:09:07.398747] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.694 [2024-07-20 18:09:07.398767] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.694 [2024-07-20 18:09:07.398808] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.694 [2024-07-20 18:09:07.402407] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.694 [2024-07-20 18:09:07.411731] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.694 [2024-07-20 18:09:07.412257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.694 [2024-07-20 18:09:07.412302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.694 [2024-07-20 18:09:07.412318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.694 [2024-07-20 18:09:07.412586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.694 [2024-07-20 18:09:07.412833] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.694 [2024-07-20 18:09:07.412863] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.694 [2024-07-20 18:09:07.412878] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.694 [2024-07-20 18:09:07.416495] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.694 [2024-07-20 18:09:07.425813] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.694 [2024-07-20 18:09:07.426327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.694 [2024-07-20 18:09:07.426358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.694 [2024-07-20 18:09:07.426377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.694 [2024-07-20 18:09:07.426615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.694 [2024-07-20 18:09:07.426881] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.694 [2024-07-20 18:09:07.426905] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.694 [2024-07-20 18:09:07.426920] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.694 [2024-07-20 18:09:07.430518] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.694 [2024-07-20 18:09:07.439658] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.694 [2024-07-20 18:09:07.440102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.694 [2024-07-20 18:09:07.440132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.694 [2024-07-20 18:09:07.440149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.694 [2024-07-20 18:09:07.440410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.694 [2024-07-20 18:09:07.440654] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.694 [2024-07-20 18:09:07.440684] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.694 [2024-07-20 18:09:07.440702] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.694 [2024-07-20 18:09:07.444242] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.694 [2024-07-20 18:09:07.453609] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.694 [2024-07-20 18:09:07.454091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.694 [2024-07-20 18:09:07.454138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.694 [2024-07-20 18:09:07.454158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.694 [2024-07-20 18:09:07.454397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.694 [2024-07-20 18:09:07.454641] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.694 [2024-07-20 18:09:07.454670] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.694 [2024-07-20 18:09:07.454687] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.694 [2024-07-20 18:09:07.458304] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.694 [2024-07-20 18:09:07.467689] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.694 [2024-07-20 18:09:07.468180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.694 [2024-07-20 18:09:07.468212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.694 [2024-07-20 18:09:07.468230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.694 [2024-07-20 18:09:07.468468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.694 [2024-07-20 18:09:07.468712] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.694 [2024-07-20 18:09:07.468736] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.694 [2024-07-20 18:09:07.468754] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.694 [2024-07-20 18:09:07.472375] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.694 [2024-07-20 18:09:07.481740] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.694 [2024-07-20 18:09:07.482254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.694 [2024-07-20 18:09:07.482285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.694 [2024-07-20 18:09:07.482304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.694 [2024-07-20 18:09:07.482542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.694 [2024-07-20 18:09:07.482786] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.694 [2024-07-20 18:09:07.482820] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.694 [2024-07-20 18:09:07.482837] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.694 [2024-07-20 18:09:07.486474] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.954 [2024-07-20 18:09:07.495755] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.954 [2024-07-20 18:09:07.496325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.954 [2024-07-20 18:09:07.496357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.954 [2024-07-20 18:09:07.496375] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.954 [2024-07-20 18:09:07.496614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.954 [2024-07-20 18:09:07.496867] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.954 [2024-07-20 18:09:07.496892] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.954 [2024-07-20 18:09:07.496909] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.954 [2024-07-20 18:09:07.500469] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.954 [2024-07-20 18:09:07.509724] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.954 [2024-07-20 18:09:07.510297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.954 [2024-07-20 18:09:07.510352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.954 [2024-07-20 18:09:07.510371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.954 [2024-07-20 18:09:07.510610] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.954 [2024-07-20 18:09:07.510862] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.954 [2024-07-20 18:09:07.510887] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.954 [2024-07-20 18:09:07.510904] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.954 [2024-07-20 18:09:07.514466] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.954 [2024-07-20 18:09:07.523720] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.954 [2024-07-20 18:09:07.524227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.954 [2024-07-20 18:09:07.524255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.954 [2024-07-20 18:09:07.524271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.954 [2024-07-20 18:09:07.524528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.954 [2024-07-20 18:09:07.524770] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.954 [2024-07-20 18:09:07.524811] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.954 [2024-07-20 18:09:07.524831] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.954 [2024-07-20 18:09:07.528403] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.954 [2024-07-20 18:09:07.537681] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.954 [2024-07-20 18:09:07.538223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.954 [2024-07-20 18:09:07.538255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.954 [2024-07-20 18:09:07.538273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.954 [2024-07-20 18:09:07.538517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.954 [2024-07-20 18:09:07.538759] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.954 [2024-07-20 18:09:07.538785] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.954 [2024-07-20 18:09:07.538817] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.954 [2024-07-20 18:09:07.542388] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.954 [2024-07-20 18:09:07.551672] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.954 [2024-07-20 18:09:07.552218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.954 [2024-07-20 18:09:07.552250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.954 [2024-07-20 18:09:07.552269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.954 [2024-07-20 18:09:07.552507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.954 [2024-07-20 18:09:07.552749] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.954 [2024-07-20 18:09:07.552774] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.954 [2024-07-20 18:09:07.552791] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.954 [2024-07-20 18:09:07.556374] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.954 [2024-07-20 18:09:07.565642] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.954 [2024-07-20 18:09:07.566175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.954 [2024-07-20 18:09:07.566208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.954 [2024-07-20 18:09:07.566226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.954 [2024-07-20 18:09:07.566464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.954 [2024-07-20 18:09:07.566707] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.954 [2024-07-20 18:09:07.566733] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.954 [2024-07-20 18:09:07.566750] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.954 [2024-07-20 18:09:07.570326] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.954 [2024-07-20 18:09:07.579605] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.954 [2024-07-20 18:09:07.580154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.954 [2024-07-20 18:09:07.580183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.954 [2024-07-20 18:09:07.580198] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.954 [2024-07-20 18:09:07.580439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.954 [2024-07-20 18:09:07.580682] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.954 [2024-07-20 18:09:07.580708] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.954 [2024-07-20 18:09:07.580730] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.954 [2024-07-20 18:09:07.584312] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.954 [2024-07-20 18:09:07.593590] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.954 [2024-07-20 18:09:07.594120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.954 [2024-07-20 18:09:07.594152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.954 [2024-07-20 18:09:07.594171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.954 [2024-07-20 18:09:07.594409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.954 [2024-07-20 18:09:07.594652] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.954 [2024-07-20 18:09:07.594677] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.954 [2024-07-20 18:09:07.594695] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.954 [2024-07-20 18:09:07.598273] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.954 [2024-07-20 18:09:07.607564] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.954 [2024-07-20 18:09:07.608118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.954 [2024-07-20 18:09:07.608146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.954 [2024-07-20 18:09:07.608162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.954 [2024-07-20 18:09:07.608417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.954 [2024-07-20 18:09:07.608660] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.954 [2024-07-20 18:09:07.608685] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.954 [2024-07-20 18:09:07.608702] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.954 [2024-07-20 18:09:07.612199] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.954 [2024-07-20 18:09:07.621456] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.954 [2024-07-20 18:09:07.621967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.954 [2024-07-20 18:09:07.621997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.954 [2024-07-20 18:09:07.622013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.954 [2024-07-20 18:09:07.622265] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.955 [2024-07-20 18:09:07.622508] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.955 [2024-07-20 18:09:07.622533] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.955 [2024-07-20 18:09:07.622550] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.955 [2024-07-20 18:09:07.626067] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.955 [2024-07-20 18:09:07.635208] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.955 [2024-07-20 18:09:07.635733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.955 [2024-07-20 18:09:07.635770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.955 [2024-07-20 18:09:07.635789] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.955 [2024-07-20 18:09:07.636035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.955 [2024-07-20 18:09:07.636297] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.955 [2024-07-20 18:09:07.636323] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.955 [2024-07-20 18:09:07.636340] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.955 [2024-07-20 18:09:07.639947] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.955 [2024-07-20 18:09:07.649127] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.955 [2024-07-20 18:09:07.649644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.955 [2024-07-20 18:09:07.649673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.955 [2024-07-20 18:09:07.649689] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.955 [2024-07-20 18:09:07.649942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.955 [2024-07-20 18:09:07.650173] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.955 [2024-07-20 18:09:07.650210] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.955 [2024-07-20 18:09:07.650225] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.955 [2024-07-20 18:09:07.653805] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.955 [2024-07-20 18:09:07.663144] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.955 [2024-07-20 18:09:07.663682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.955 [2024-07-20 18:09:07.663714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.955 [2024-07-20 18:09:07.663733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.955 [2024-07-20 18:09:07.663985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.955 [2024-07-20 18:09:07.664245] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.955 [2024-07-20 18:09:07.664271] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.955 [2024-07-20 18:09:07.664288] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.955 [2024-07-20 18:09:07.667840] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.955 [2024-07-20 18:09:07.677118] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.955 [2024-07-20 18:09:07.677719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.955 [2024-07-20 18:09:07.677765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.955 [2024-07-20 18:09:07.677786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.955 [2024-07-20 18:09:07.678038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.955 [2024-07-20 18:09:07.678304] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.955 [2024-07-20 18:09:07.678330] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.955 [2024-07-20 18:09:07.678346] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.955 [2024-07-20 18:09:07.681961] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.955 [2024-07-20 18:09:07.691174] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.955 [2024-07-20 18:09:07.691693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.955 [2024-07-20 18:09:07.691727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.955 [2024-07-20 18:09:07.691746] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.955 [2024-07-20 18:09:07.691998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.955 [2024-07-20 18:09:07.692251] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.955 [2024-07-20 18:09:07.692277] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.955 [2024-07-20 18:09:07.692294] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.955 [2024-07-20 18:09:07.695911] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.955 [2024-07-20 18:09:07.705056] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.955 [2024-07-20 18:09:07.705612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.955 [2024-07-20 18:09:07.705645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.955 [2024-07-20 18:09:07.705663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.955 [2024-07-20 18:09:07.705922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.955 [2024-07-20 18:09:07.706164] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.955 [2024-07-20 18:09:07.706190] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.955 [2024-07-20 18:09:07.706207] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.955 [2024-07-20 18:09:07.709797] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.955 [2024-07-20 18:09:07.718934] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.955 [2024-07-20 18:09:07.719448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.955 [2024-07-20 18:09:07.719478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.955 [2024-07-20 18:09:07.719495] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.955 [2024-07-20 18:09:07.719717] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.955 [2024-07-20 18:09:07.719965] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.955 [2024-07-20 18:09:07.719989] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.955 [2024-07-20 18:09:07.720005] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.955 [2024-07-20 18:09:07.723230] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:32.955 [2024-07-20 18:09:07.732260] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.955 [2024-07-20 18:09:07.732757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.955 [2024-07-20 18:09:07.732785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.955 [2024-07-20 18:09:07.732825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.955 [2024-07-20 18:09:07.733061] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.955 [2024-07-20 18:09:07.733260] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.955 [2024-07-20 18:09:07.733281] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.955 [2024-07-20 18:09:07.733294] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:32.955 [2024-07-20 18:09:07.736553] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:32.955 [2024-07-20 18:09:07.746146] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:32.955 [2024-07-20 18:09:07.746675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.955 [2024-07-20 18:09:07.746707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:32.955 [2024-07-20 18:09:07.746741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:32.955 [2024-07-20 18:09:07.746969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:32.955 [2024-07-20 18:09:07.747224] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:32.955 [2024-07-20 18:09:07.747245] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:32.955 [2024-07-20 18:09:07.747259] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.215 [2024-07-20 18:09:07.750761] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.215 [2024-07-20 18:09:07.760081] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.215 [2024-07-20 18:09:07.760682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.215 [2024-07-20 18:09:07.760714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.215 [2024-07-20 18:09:07.760733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.215 [2024-07-20 18:09:07.760986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.215 [2024-07-20 18:09:07.761229] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.215 [2024-07-20 18:09:07.761255] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.215 [2024-07-20 18:09:07.761272] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.215 [2024-07-20 18:09:07.764889] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.215 [2024-07-20 18:09:07.773970] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.215 [2024-07-20 18:09:07.774493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.215 [2024-07-20 18:09:07.774521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.215 [2024-07-20 18:09:07.774542] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.215 [2024-07-20 18:09:07.774809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.215 [2024-07-20 18:09:07.775054] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.215 [2024-07-20 18:09:07.775080] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.215 [2024-07-20 18:09:07.775097] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.215 [2024-07-20 18:09:07.778586] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.215 [2024-07-20 18:09:07.787959] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.215 [2024-07-20 18:09:07.788475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.215 [2024-07-20 18:09:07.788507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.215 [2024-07-20 18:09:07.788526] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.215 [2024-07-20 18:09:07.788765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.215 [2024-07-20 18:09:07.789017] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.215 [2024-07-20 18:09:07.789040] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.215 [2024-07-20 18:09:07.789056] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.215 [2024-07-20 18:09:07.792670] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.215 [2024-07-20 18:09:07.801881] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.215 [2024-07-20 18:09:07.802380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.215 [2024-07-20 18:09:07.802412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.215 [2024-07-20 18:09:07.802430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.215 [2024-07-20 18:09:07.802669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.215 [2024-07-20 18:09:07.802931] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.215 [2024-07-20 18:09:07.802954] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.215 [2024-07-20 18:09:07.802969] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.215 [2024-07-20 18:09:07.806583] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.215 [2024-07-20 18:09:07.815971] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.215 [2024-07-20 18:09:07.816517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.215 [2024-07-20 18:09:07.816550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.215 [2024-07-20 18:09:07.816569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.215 [2024-07-20 18:09:07.816819] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.215 [2024-07-20 18:09:07.817059] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.215 [2024-07-20 18:09:07.817100] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.215 [2024-07-20 18:09:07.817115] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.215 [2024-07-20 18:09:07.820711] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.215 [2024-07-20 18:09:07.829918] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.215 [2024-07-20 18:09:07.830527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.215 [2024-07-20 18:09:07.830575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.215 [2024-07-20 18:09:07.830593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.215 [2024-07-20 18:09:07.830856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.215 [2024-07-20 18:09:07.831056] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.215 [2024-07-20 18:09:07.831101] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.215 [2024-07-20 18:09:07.831114] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.215 [2024-07-20 18:09:07.834633] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.215 [2024-07-20 18:09:07.843776] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.215 [2024-07-20 18:09:07.844320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.215 [2024-07-20 18:09:07.844348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.215 [2024-07-20 18:09:07.844378] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.215 [2024-07-20 18:09:07.844575] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.215 [2024-07-20 18:09:07.844846] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.215 [2024-07-20 18:09:07.844871] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.215 [2024-07-20 18:09:07.844888] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.215 [2024-07-20 18:09:07.848460] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.215 [2024-07-20 18:09:07.857743] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.215 [2024-07-20 18:09:07.858279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.215 [2024-07-20 18:09:07.858311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.215 [2024-07-20 18:09:07.858330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.215 [2024-07-20 18:09:07.858569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.215 [2024-07-20 18:09:07.858822] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.215 [2024-07-20 18:09:07.858848] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.215 [2024-07-20 18:09:07.858864] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.215 [2024-07-20 18:09:07.862425] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.215 [2024-07-20 18:09:07.871683] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.216 [2024-07-20 18:09:07.872206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.216 [2024-07-20 18:09:07.872239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.216 [2024-07-20 18:09:07.872258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.216 [2024-07-20 18:09:07.872497] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.216 [2024-07-20 18:09:07.872741] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.216 [2024-07-20 18:09:07.872766] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.216 [2024-07-20 18:09:07.872783] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.216 [2024-07-20 18:09:07.876358] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.216 [2024-07-20 18:09:07.885618] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.216 [2024-07-20 18:09:07.886156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.216 [2024-07-20 18:09:07.886188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.216 [2024-07-20 18:09:07.886206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.216 [2024-07-20 18:09:07.886445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.216 [2024-07-20 18:09:07.886688] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.216 [2024-07-20 18:09:07.886713] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.216 [2024-07-20 18:09:07.886730] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.216 [2024-07-20 18:09:07.890301] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.216 [2024-07-20 18:09:07.899582] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.216 [2024-07-20 18:09:07.900154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.216 [2024-07-20 18:09:07.900205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.216 [2024-07-20 18:09:07.900223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.216 [2024-07-20 18:09:07.900462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.216 [2024-07-20 18:09:07.900704] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.216 [2024-07-20 18:09:07.900730] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.216 [2024-07-20 18:09:07.900746] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.216 [2024-07-20 18:09:07.904329] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.216 [2024-07-20 18:09:07.913362] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.216 [2024-07-20 18:09:07.913874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.216 [2024-07-20 18:09:07.913904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.216 [2024-07-20 18:09:07.913921] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.216 [2024-07-20 18:09:07.914187] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.216 [2024-07-20 18:09:07.914431] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.216 [2024-07-20 18:09:07.914456] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.216 [2024-07-20 18:09:07.914473] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.216 [2024-07-20 18:09:07.918002] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.216 [2024-07-20 18:09:07.927228] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.216 [2024-07-20 18:09:07.927752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.216 [2024-07-20 18:09:07.927784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.216 [2024-07-20 18:09:07.927812] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.216 [2024-07-20 18:09:07.928048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.216 [2024-07-20 18:09:07.928305] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.216 [2024-07-20 18:09:07.928330] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.216 [2024-07-20 18:09:07.928346] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.216 [2024-07-20 18:09:07.931902] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.216 [2024-07-20 18:09:07.941082] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.216 [2024-07-20 18:09:07.941631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.216 [2024-07-20 18:09:07.941658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.216 [2024-07-20 18:09:07.941674] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.216 [2024-07-20 18:09:07.941947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.216 [2024-07-20 18:09:07.942192] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.216 [2024-07-20 18:09:07.942217] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.216 [2024-07-20 18:09:07.942233] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.216 [2024-07-20 18:09:07.945813] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.216 [2024-07-20 18:09:07.955092] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.216 [2024-07-20 18:09:07.955623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.216 [2024-07-20 18:09:07.955655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.216 [2024-07-20 18:09:07.955673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.216 [2024-07-20 18:09:07.955925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.216 [2024-07-20 18:09:07.956167] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.216 [2024-07-20 18:09:07.956193] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.216 [2024-07-20 18:09:07.956215] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.216 [2024-07-20 18:09:07.959785] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.216 [2024-07-20 18:09:07.969067] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.216 [2024-07-20 18:09:07.969706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.216 [2024-07-20 18:09:07.969754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.216 [2024-07-20 18:09:07.969772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.216 [2024-07-20 18:09:07.970020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.216 [2024-07-20 18:09:07.970264] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.216 [2024-07-20 18:09:07.970288] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.216 [2024-07-20 18:09:07.970305] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.216 [2024-07-20 18:09:07.973882] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.216 [2024-07-20 18:09:07.982957] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.216 [2024-07-20 18:09:07.983581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.216 [2024-07-20 18:09:07.983612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.216 [2024-07-20 18:09:07.983631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.216 [2024-07-20 18:09:07.983882] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.216 [2024-07-20 18:09:07.984125] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.216 [2024-07-20 18:09:07.984150] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.216 [2024-07-20 18:09:07.984167] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.216 [2024-07-20 18:09:07.987735] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.216 [2024-07-20 18:09:07.996819] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.216 [2024-07-20 18:09:07.997591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.216 [2024-07-20 18:09:07.997644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.216 [2024-07-20 18:09:07.997662] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.216 [2024-07-20 18:09:07.997911] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.216 [2024-07-20 18:09:07.998155] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.216 [2024-07-20 18:09:07.998180] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.216 [2024-07-20 18:09:07.998197] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.216 [2024-07-20 18:09:08.001765] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.476 [2024-07-20 18:09:08.010917] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.476 [2024-07-20 18:09:08.011494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.476 [2024-07-20 18:09:08.011531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.476 [2024-07-20 18:09:08.011551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.476 [2024-07-20 18:09:08.011791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.476 [2024-07-20 18:09:08.012048] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.476 [2024-07-20 18:09:08.012073] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.476 [2024-07-20 18:09:08.012090] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.476 [2024-07-20 18:09:08.015656] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.476 [2024-07-20 18:09:08.024947] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.476 [2024-07-20 18:09:08.025471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.476 [2024-07-20 18:09:08.025503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.476 [2024-07-20 18:09:08.025521] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.476 [2024-07-20 18:09:08.025760] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.476 [2024-07-20 18:09:08.026013] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.476 [2024-07-20 18:09:08.026039] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.476 [2024-07-20 18:09:08.026056] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.476 [2024-07-20 18:09:08.029617] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.476 [2024-07-20 18:09:08.038889] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.476 [2024-07-20 18:09:08.039429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.476 [2024-07-20 18:09:08.039461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.476 [2024-07-20 18:09:08.039479] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.476 [2024-07-20 18:09:08.039718] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.476 [2024-07-20 18:09:08.039974] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.476 [2024-07-20 18:09:08.040000] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.476 [2024-07-20 18:09:08.040017] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.476 [2024-07-20 18:09:08.043581] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.476 [2024-07-20 18:09:08.052852] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.476 [2024-07-20 18:09:08.053375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.476 [2024-07-20 18:09:08.053407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.476 [2024-07-20 18:09:08.053424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.476 [2024-07-20 18:09:08.053663] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.476 [2024-07-20 18:09:08.053923] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.476 [2024-07-20 18:09:08.053949] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.476 [2024-07-20 18:09:08.053965] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.476 [2024-07-20 18:09:08.057532] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.476 [2024-07-20 18:09:08.066810] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.476 [2024-07-20 18:09:08.067313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.476 [2024-07-20 18:09:08.067345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.476 [2024-07-20 18:09:08.067364] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.476 [2024-07-20 18:09:08.067603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.476 [2024-07-20 18:09:08.067859] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.476 [2024-07-20 18:09:08.067885] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.476 [2024-07-20 18:09:08.067901] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.476 [2024-07-20 18:09:08.071463] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.476 [2024-07-20 18:09:08.080730] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.476 [2024-07-20 18:09:08.081261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.476 [2024-07-20 18:09:08.081293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.476 [2024-07-20 18:09:08.081311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.476 [2024-07-20 18:09:08.081550] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.476 [2024-07-20 18:09:08.081805] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.476 [2024-07-20 18:09:08.081831] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.476 [2024-07-20 18:09:08.081849] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.476 [2024-07-20 18:09:08.085415] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.476 [2024-07-20 18:09:08.094676] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.476 [2024-07-20 18:09:08.095203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.477 [2024-07-20 18:09:08.095235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.477 [2024-07-20 18:09:08.095253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.477 [2024-07-20 18:09:08.095491] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.477 [2024-07-20 18:09:08.095734] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.477 [2024-07-20 18:09:08.095760] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.477 [2024-07-20 18:09:08.095776] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.477 [2024-07-20 18:09:08.099355] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.477 [2024-07-20 18:09:08.108638] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.477 [2024-07-20 18:09:08.109204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.477 [2024-07-20 18:09:08.109259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.477 [2024-07-20 18:09:08.109278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.477 [2024-07-20 18:09:08.109518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.477 [2024-07-20 18:09:08.109772] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.477 [2024-07-20 18:09:08.109808] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.477 [2024-07-20 18:09:08.109837] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.477 [2024-07-20 18:09:08.113098] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.477 [2024-07-20 18:09:08.122696] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.477 [2024-07-20 18:09:08.123197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.477 [2024-07-20 18:09:08.123225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.477 [2024-07-20 18:09:08.123240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.477 [2024-07-20 18:09:08.123471] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.477 [2024-07-20 18:09:08.123730] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.477 [2024-07-20 18:09:08.123755] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.477 [2024-07-20 18:09:08.123772] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.477 [2024-07-20 18:09:08.127379] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.477 [2024-07-20 18:09:08.136706] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.477 [2024-07-20 18:09:08.137253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.477 [2024-07-20 18:09:08.137280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.477 [2024-07-20 18:09:08.137296] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.477 [2024-07-20 18:09:08.137501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.477 [2024-07-20 18:09:08.137694] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.477 [2024-07-20 18:09:08.137715] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.477 [2024-07-20 18:09:08.137728] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.477 [2024-07-20 18:09:08.141340] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.477 [2024-07-20 18:09:08.150710] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.477 [2024-07-20 18:09:08.151204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.477 [2024-07-20 18:09:08.151235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.477 [2024-07-20 18:09:08.151259] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.477 [2024-07-20 18:09:08.151498] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.477 [2024-07-20 18:09:08.151742] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.477 [2024-07-20 18:09:08.151766] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.477 [2024-07-20 18:09:08.151783] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.477 [2024-07-20 18:09:08.155391] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.477 [2024-07-20 18:09:08.164760] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.477 [2024-07-20 18:09:08.165278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.477 [2024-07-20 18:09:08.165328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.477 [2024-07-20 18:09:08.165346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.477 [2024-07-20 18:09:08.165585] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.477 [2024-07-20 18:09:08.165855] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.477 [2024-07-20 18:09:08.165878] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.477 [2024-07-20 18:09:08.165893] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.477 [2024-07-20 18:09:08.169479] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.477 [2024-07-20 18:09:08.178818] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.477 [2024-07-20 18:09:08.179424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.477 [2024-07-20 18:09:08.179488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.477 [2024-07-20 18:09:08.179508] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.477 [2024-07-20 18:09:08.179755] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.477 [2024-07-20 18:09:08.180010] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.477 [2024-07-20 18:09:08.180034] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.477 [2024-07-20 18:09:08.180051] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.477 [2024-07-20 18:09:08.183635] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.477 [2024-07-20 18:09:08.192674] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.477 [2024-07-20 18:09:08.193199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.477 [2024-07-20 18:09:08.193233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.477 [2024-07-20 18:09:08.193252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.477 [2024-07-20 18:09:08.193491] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.477 [2024-07-20 18:09:08.193736] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.477 [2024-07-20 18:09:08.193767] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.477 [2024-07-20 18:09:08.193785] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.477 [2024-07-20 18:09:08.197360] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.477 [2024-07-20 18:09:08.206620] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.477 [2024-07-20 18:09:08.207154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.477 [2024-07-20 18:09:08.207187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.478 [2024-07-20 18:09:08.207205] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.478 [2024-07-20 18:09:08.207445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.478 [2024-07-20 18:09:08.207688] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.478 [2024-07-20 18:09:08.207713] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.478 [2024-07-20 18:09:08.207730] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.478 [2024-07-20 18:09:08.211311] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.478 [2024-07-20 18:09:08.220586] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.478 [2024-07-20 18:09:08.221100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.478 [2024-07-20 18:09:08.221132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.478 [2024-07-20 18:09:08.221151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.478 [2024-07-20 18:09:08.221390] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.478 [2024-07-20 18:09:08.221633] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.478 [2024-07-20 18:09:08.221659] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.478 [2024-07-20 18:09:08.221675] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.478 [2024-07-20 18:09:08.225259] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.478 [2024-07-20 18:09:08.234550] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.478 [2024-07-20 18:09:08.235093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.478 [2024-07-20 18:09:08.235126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.478 [2024-07-20 18:09:08.235145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.478 [2024-07-20 18:09:08.235385] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.478 [2024-07-20 18:09:08.235628] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.478 [2024-07-20 18:09:08.235653] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.478 [2024-07-20 18:09:08.235670] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.478 [2024-07-20 18:09:08.239265] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.478 [2024-07-20 18:09:08.248539] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.478 [2024-07-20 18:09:08.249074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.478 [2024-07-20 18:09:08.249107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.478 [2024-07-20 18:09:08.249126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.478 [2024-07-20 18:09:08.249365] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.478 [2024-07-20 18:09:08.249608] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.478 [2024-07-20 18:09:08.249633] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.478 [2024-07-20 18:09:08.249649] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.478 [2024-07-20 18:09:08.253227] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.478 [2024-07-20 18:09:08.262490] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.478 [2024-07-20 18:09:08.263017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.478 [2024-07-20 18:09:08.263050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.478 [2024-07-20 18:09:08.263068] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.478 [2024-07-20 18:09:08.263307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.478 [2024-07-20 18:09:08.263550] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.478 [2024-07-20 18:09:08.263575] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.478 [2024-07-20 18:09:08.263592] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.478 [2024-07-20 18:09:08.267172] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.738 [2024-07-20 18:09:08.276440] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.738 [2024-07-20 18:09:08.276971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.738 [2024-07-20 18:09:08.277003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.738 [2024-07-20 18:09:08.277022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.738 [2024-07-20 18:09:08.277261] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.738 [2024-07-20 18:09:08.277503] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.738 [2024-07-20 18:09:08.277528] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.738 [2024-07-20 18:09:08.277545] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.738 [2024-07-20 18:09:08.281124] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.738 [2024-07-20 18:09:08.290389] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.738 [2024-07-20 18:09:08.290966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.738 [2024-07-20 18:09:08.290998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.738 [2024-07-20 18:09:08.291017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.738 [2024-07-20 18:09:08.291262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.738 [2024-07-20 18:09:08.291505] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.738 [2024-07-20 18:09:08.291530] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.738 [2024-07-20 18:09:08.291547] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.738 [2024-07-20 18:09:08.295130] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.738 [2024-07-20 18:09:08.304399] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.738 [2024-07-20 18:09:08.304921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.738 [2024-07-20 18:09:08.304953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.738 [2024-07-20 18:09:08.304972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.738 [2024-07-20 18:09:08.305211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.738 [2024-07-20 18:09:08.305454] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.738 [2024-07-20 18:09:08.305479] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.738 [2024-07-20 18:09:08.305495] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.738 [2024-07-20 18:09:08.309073] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.738 [2024-07-20 18:09:08.318334] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.738 [2024-07-20 18:09:08.318833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.738 [2024-07-20 18:09:08.318865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.738 [2024-07-20 18:09:08.318883] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.738 [2024-07-20 18:09:08.319123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.738 [2024-07-20 18:09:08.319366] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.738 [2024-07-20 18:09:08.319391] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.738 [2024-07-20 18:09:08.319408] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.738 [2024-07-20 18:09:08.322985] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.738 [2024-07-20 18:09:08.332249] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.738 [2024-07-20 18:09:08.332785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.738 [2024-07-20 18:09:08.332827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.738 [2024-07-20 18:09:08.332847] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.738 [2024-07-20 18:09:08.333087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.738 [2024-07-20 18:09:08.333329] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.738 [2024-07-20 18:09:08.333354] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.738 [2024-07-20 18:09:08.333377] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.738 [2024-07-20 18:09:08.336955] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.738 [2024-07-20 18:09:08.346226] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.738 [2024-07-20 18:09:08.346761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.738 [2024-07-20 18:09:08.346805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.738 [2024-07-20 18:09:08.346826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.738 [2024-07-20 18:09:08.347066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.738 [2024-07-20 18:09:08.347309] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.738 [2024-07-20 18:09:08.347334] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.738 [2024-07-20 18:09:08.347350] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.738 [2024-07-20 18:09:08.350931] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.738 [2024-07-20 18:09:08.360213] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.738 [2024-07-20 18:09:08.360747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.738 [2024-07-20 18:09:08.360778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.738 [2024-07-20 18:09:08.360804] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.738 [2024-07-20 18:09:08.361046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.738 [2024-07-20 18:09:08.361290] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.738 [2024-07-20 18:09:08.361315] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.738 [2024-07-20 18:09:08.361331] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.738 [2024-07-20 18:09:08.364906] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.738 [2024-07-20 18:09:08.374188] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.738 [2024-07-20 18:09:08.374714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.738 [2024-07-20 18:09:08.374746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.738 [2024-07-20 18:09:08.374765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.738 [2024-07-20 18:09:08.375016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.738 [2024-07-20 18:09:08.375262] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.738 [2024-07-20 18:09:08.375287] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.738 [2024-07-20 18:09:08.375303] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.739 [2024-07-20 18:09:08.378880] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.739 [2024-07-20 18:09:08.388188] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.739 [2024-07-20 18:09:08.388695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.739 [2024-07-20 18:09:08.388731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.739 [2024-07-20 18:09:08.388749] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.739 [2024-07-20 18:09:08.388997] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.739 [2024-07-20 18:09:08.389251] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.739 [2024-07-20 18:09:08.389272] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.739 [2024-07-20 18:09:08.389286] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.739 [2024-07-20 18:09:08.392374] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.739 [2024-07-20 18:09:08.402166] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.739 [2024-07-20 18:09:08.402688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.739 [2024-07-20 18:09:08.402719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.739 [2024-07-20 18:09:08.402737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.739 [2024-07-20 18:09:08.402986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.739 [2024-07-20 18:09:08.403231] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.739 [2024-07-20 18:09:08.403255] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.739 [2024-07-20 18:09:08.403272] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.739 [2024-07-20 18:09:08.406854] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.739 [2024-07-20 18:09:08.415704] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.739 [2024-07-20 18:09:08.416180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.739 [2024-07-20 18:09:08.416208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.739 [2024-07-20 18:09:08.416225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.739 [2024-07-20 18:09:08.416473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.739 [2024-07-20 18:09:08.416666] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.739 [2024-07-20 18:09:08.416686] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.739 [2024-07-20 18:09:08.416699] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.739 [2024-07-20 18:09:08.419754] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.739 [2024-07-20 18:09:08.429719] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.739 [2024-07-20 18:09:08.430396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.739 [2024-07-20 18:09:08.430449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.739 [2024-07-20 18:09:08.430468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.739 [2024-07-20 18:09:08.430707] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.739 [2024-07-20 18:09:08.430979] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.739 [2024-07-20 18:09:08.431003] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.739 [2024-07-20 18:09:08.431018] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.739 [2024-07-20 18:09:08.434642] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.739 [2024-07-20 18:09:08.443801] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.739 [2024-07-20 18:09:08.444380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.739 [2024-07-20 18:09:08.444430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.739 [2024-07-20 18:09:08.444448] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.739 [2024-07-20 18:09:08.444686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.739 [2024-07-20 18:09:08.444946] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.739 [2024-07-20 18:09:08.444969] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.739 [2024-07-20 18:09:08.444984] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.739 [2024-07-20 18:09:08.448581] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.739 [2024-07-20 18:09:08.457754] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.739 [2024-07-20 18:09:08.458279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.739 [2024-07-20 18:09:08.458311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.739 [2024-07-20 18:09:08.458329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.739 [2024-07-20 18:09:08.458567] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.739 [2024-07-20 18:09:08.458824] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.739 [2024-07-20 18:09:08.458864] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.739 [2024-07-20 18:09:08.458879] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.739 [2024-07-20 18:09:08.462452] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.739 [2024-07-20 18:09:08.471645] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.739 [2024-07-20 18:09:08.472166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.739 [2024-07-20 18:09:08.472198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.739 [2024-07-20 18:09:08.472217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.739 [2024-07-20 18:09:08.472455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.739 [2024-07-20 18:09:08.472686] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.739 [2024-07-20 18:09:08.472707] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.739 [2024-07-20 18:09:08.472721] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.739 [2024-07-20 18:09:08.476306] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.739 [2024-07-20 18:09:08.485628] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.739 [2024-07-20 18:09:08.486133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.739 [2024-07-20 18:09:08.486165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.739 [2024-07-20 18:09:08.486183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.739 [2024-07-20 18:09:08.486422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.739 [2024-07-20 18:09:08.486665] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.739 [2024-07-20 18:09:08.486689] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.739 [2024-07-20 18:09:08.486705] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.739 [2024-07-20 18:09:08.490322] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.739 [2024-07-20 18:09:08.499678] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.739 [2024-07-20 18:09:08.500177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.739 [2024-07-20 18:09:08.500209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.739 [2024-07-20 18:09:08.500228] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.739 [2024-07-20 18:09:08.500466] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.739 [2024-07-20 18:09:08.500708] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.739 [2024-07-20 18:09:08.500734] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.739 [2024-07-20 18:09:08.500750] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.739 [2024-07-20 18:09:08.504362] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.739 [2024-07-20 18:09:08.513729] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.739 [2024-07-20 18:09:08.514282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.739 [2024-07-20 18:09:08.514335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.739 [2024-07-20 18:09:08.514353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.739 [2024-07-20 18:09:08.514592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.739 [2024-07-20 18:09:08.514859] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.739 [2024-07-20 18:09:08.514883] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.739 [2024-07-20 18:09:08.514898] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.739 [2024-07-20 18:09:08.518477] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.739 [2024-07-20 18:09:08.527743] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.739 [2024-07-20 18:09:08.528248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.739 [2024-07-20 18:09:08.528279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.739 [2024-07-20 18:09:08.528304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.739 [2024-07-20 18:09:08.528544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.739 [2024-07-20 18:09:08.528788] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.739 [2024-07-20 18:09:08.528823] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.740 [2024-07-20 18:09:08.528841] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.740 [2024-07-20 18:09:08.532406] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.999 [2024-07-20 18:09:08.541666] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.999 [2024-07-20 18:09:08.542204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.999 [2024-07-20 18:09:08.542236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.999 [2024-07-20 18:09:08.542254] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.999 [2024-07-20 18:09:08.542494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.999 [2024-07-20 18:09:08.542737] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.999 [2024-07-20 18:09:08.542762] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.999 [2024-07-20 18:09:08.542779] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.999 [2024-07-20 18:09:08.546349] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.999 [2024-07-20 18:09:08.555614] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.999 [2024-07-20 18:09:08.556155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.999 [2024-07-20 18:09:08.556187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.999 [2024-07-20 18:09:08.556205] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.999 [2024-07-20 18:09:08.556447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.999 [2024-07-20 18:09:08.556690] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.999 [2024-07-20 18:09:08.556714] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.999 [2024-07-20 18:09:08.556730] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.999 [2024-07-20 18:09:08.560304] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.999 [2024-07-20 18:09:08.569559] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.999 [2024-07-20 18:09:08.570083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.999 [2024-07-20 18:09:08.570114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.999 [2024-07-20 18:09:08.570132] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.999 [2024-07-20 18:09:08.570370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.999 [2024-07-20 18:09:08.570614] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.999 [2024-07-20 18:09:08.570643] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.999 [2024-07-20 18:09:08.570660] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.999 [2024-07-20 18:09:08.574233] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.999 [2024-07-20 18:09:08.583513] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.999 [2024-07-20 18:09:08.584043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.999 [2024-07-20 18:09:08.584074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.999 [2024-07-20 18:09:08.584099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.999 [2024-07-20 18:09:08.584353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.999 [2024-07-20 18:09:08.584597] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.999 [2024-07-20 18:09:08.584622] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.999 [2024-07-20 18:09:08.584638] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.999 [2024-07-20 18:09:08.588215] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.999 [2024-07-20 18:09:08.597477] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.999 [2024-07-20 18:09:08.597984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.999 [2024-07-20 18:09:08.598012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.999 [2024-07-20 18:09:08.598029] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.999 [2024-07-20 18:09:08.598287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.999 [2024-07-20 18:09:08.598531] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.999 [2024-07-20 18:09:08.598555] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.999 [2024-07-20 18:09:08.598571] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.999 [2024-07-20 18:09:08.602197] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.999 [2024-07-20 18:09:08.611352] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.999 [2024-07-20 18:09:08.611899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.999 [2024-07-20 18:09:08.611928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.999 [2024-07-20 18:09:08.611945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.999 [2024-07-20 18:09:08.612186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.999 [2024-07-20 18:09:08.612430] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.999 [2024-07-20 18:09:08.612455] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.999 [2024-07-20 18:09:08.612471] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.999 [2024-07-20 18:09:08.616105] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.999 [2024-07-20 18:09:08.625262] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.999 [2024-07-20 18:09:08.625762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.999 [2024-07-20 18:09:08.625808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.999 [2024-07-20 18:09:08.625843] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.999 [2024-07-20 18:09:08.626060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.999 [2024-07-20 18:09:08.626336] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.999 [2024-07-20 18:09:08.626361] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.999 [2024-07-20 18:09:08.626378] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.999 [2024-07-20 18:09:08.629976] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:33.999 [2024-07-20 18:09:08.639275] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:33.999 [2024-07-20 18:09:08.639804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.999 [2024-07-20 18:09:08.639836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:33.999 [2024-07-20 18:09:08.639871] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:33.999 [2024-07-20 18:09:08.640113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:33.999 [2024-07-20 18:09:08.640367] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:33.999 [2024-07-20 18:09:08.640391] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:33.999 [2024-07-20 18:09:08.640408] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.999 [2024-07-20 18:09:08.643974] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:33.999 [2024-07-20 18:09:08.653234] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.000 [2024-07-20 18:09:08.653800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.000 [2024-07-20 18:09:08.653846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.000 [2024-07-20 18:09:08.653865] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.000 [2024-07-20 18:09:08.654103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.000 [2024-07-20 18:09:08.654347] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.000 [2024-07-20 18:09:08.654371] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.000 [2024-07-20 18:09:08.654387] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.000 [2024-07-20 18:09:08.657961] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.000 [2024-07-20 18:09:08.667198] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.000 [2024-07-20 18:09:08.667753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.000 [2024-07-20 18:09:08.667803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.000 [2024-07-20 18:09:08.667823] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.000 [2024-07-20 18:09:08.668075] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.000 [2024-07-20 18:09:08.668326] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.000 [2024-07-20 18:09:08.668350] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.000 [2024-07-20 18:09:08.668367] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.000 [2024-07-20 18:09:08.671935] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.000 [2024-07-20 18:09:08.681187] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.000 [2024-07-20 18:09:08.681714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.000 [2024-07-20 18:09:08.681740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.000 [2024-07-20 18:09:08.681756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.000 [2024-07-20 18:09:08.682024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.000 [2024-07-20 18:09:08.682268] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.000 [2024-07-20 18:09:08.682293] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.000 [2024-07-20 18:09:08.682309] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.000 [2024-07-20 18:09:08.685680] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.000 [2024-07-20 18:09:08.695045] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.000 [2024-07-20 18:09:08.695513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.000 [2024-07-20 18:09:08.695540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.000 [2024-07-20 18:09:08.695556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.000 [2024-07-20 18:09:08.695814] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.000 [2024-07-20 18:09:08.696048] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.000 [2024-07-20 18:09:08.696071] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.000 [2024-07-20 18:09:08.696089] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.000 [2024-07-20 18:09:08.699723] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.000 [2024-07-20 18:09:08.709005] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.000 [2024-07-20 18:09:08.709534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.000 [2024-07-20 18:09:08.709565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.000 [2024-07-20 18:09:08.709584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.000 [2024-07-20 18:09:08.709833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.000 [2024-07-20 18:09:08.710084] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.000 [2024-07-20 18:09:08.710106] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.000 [2024-07-20 18:09:08.710125] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.000 [2024-07-20 18:09:08.713683] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.000 [2024-07-20 18:09:08.722540] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.000 [2024-07-20 18:09:08.722987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.000 [2024-07-20 18:09:08.723015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.000 [2024-07-20 18:09:08.723032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.000 [2024-07-20 18:09:08.723258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.000 [2024-07-20 18:09:08.723471] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.000 [2024-07-20 18:09:08.723492] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.000 [2024-07-20 18:09:08.723507] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.000 [2024-07-20 18:09:08.726734] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.000 [2024-07-20 18:09:08.735953] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.000 [2024-07-20 18:09:08.736577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.000 [2024-07-20 18:09:08.736632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.000 [2024-07-20 18:09:08.736649] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.000 [2024-07-20 18:09:08.736896] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.000 [2024-07-20 18:09:08.737125] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.000 [2024-07-20 18:09:08.737161] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.000 [2024-07-20 18:09:08.737175] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.000 [2024-07-20 18:09:08.740395] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.000 [2024-07-20 18:09:08.749777] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.000 [2024-07-20 18:09:08.750324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.000 [2024-07-20 18:09:08.750358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.000 [2024-07-20 18:09:08.750377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.000 [2024-07-20 18:09:08.750610] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.000 [2024-07-20 18:09:08.750835] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.000 [2024-07-20 18:09:08.750857] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.000 [2024-07-20 18:09:08.750871] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.000 [2024-07-20 18:09:08.754333] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.000 [2024-07-20 18:09:08.763631] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.000 [2024-07-20 18:09:08.764186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.000 [2024-07-20 18:09:08.764218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.000 [2024-07-20 18:09:08.764237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.000 [2024-07-20 18:09:08.764476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.000 [2024-07-20 18:09:08.764719] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.000 [2024-07-20 18:09:08.764744] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.000 [2024-07-20 18:09:08.764761] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.000 [2024-07-20 18:09:08.768343] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.000 [2024-07-20 18:09:08.777598] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.000 [2024-07-20 18:09:08.778115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.000 [2024-07-20 18:09:08.778147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.000 [2024-07-20 18:09:08.778165] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.000 [2024-07-20 18:09:08.778404] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.001 [2024-07-20 18:09:08.778647] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.001 [2024-07-20 18:09:08.778672] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.001 [2024-07-20 18:09:08.778688] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.001 [2024-07-20 18:09:08.782262] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.001 [2024-07-20 18:09:08.791524] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.001 [2024-07-20 18:09:08.792034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.001 [2024-07-20 18:09:08.792066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.001 [2024-07-20 18:09:08.792090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.001 [2024-07-20 18:09:08.792329] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.001 [2024-07-20 18:09:08.792572] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.001 [2024-07-20 18:09:08.792596] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.001 [2024-07-20 18:09:08.792612] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.259 [2024-07-20 18:09:08.796195] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.259 [2024-07-20 18:09:08.805387] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.259 [2024-07-20 18:09:08.805953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.259 [2024-07-20 18:09:08.805983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.259 [2024-07-20 18:09:08.805999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.259 [2024-07-20 18:09:08.806246] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.259 [2024-07-20 18:09:08.806496] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.259 [2024-07-20 18:09:08.806520] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.259 [2024-07-20 18:09:08.806537] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.259 [2024-07-20 18:09:08.810162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.259 [2024-07-20 18:09:08.819394] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.259 [2024-07-20 18:09:08.819927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.259 [2024-07-20 18:09:08.819957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.259 [2024-07-20 18:09:08.819974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.259 [2024-07-20 18:09:08.820214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.259 [2024-07-20 18:09:08.820456] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.259 [2024-07-20 18:09:08.820477] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.259 [2024-07-20 18:09:08.820491] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.259 [2024-07-20 18:09:08.824025] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.260 [2024-07-20 18:09:08.833378] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.260 [2024-07-20 18:09:08.833899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.260 [2024-07-20 18:09:08.833928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.260 [2024-07-20 18:09:08.833945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.260 [2024-07-20 18:09:08.834203] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.260 [2024-07-20 18:09:08.834447] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.260 [2024-07-20 18:09:08.834472] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.260 [2024-07-20 18:09:08.834488] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.260 [2024-07-20 18:09:08.838094] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.260 [2024-07-20 18:09:08.847304] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.260 [2024-07-20 18:09:08.847889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.260 [2024-07-20 18:09:08.847929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.260 [2024-07-20 18:09:08.847947] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.260 [2024-07-20 18:09:08.848193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.260 [2024-07-20 18:09:08.848437] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.260 [2024-07-20 18:09:08.848462] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.260 [2024-07-20 18:09:08.848478] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.260 [2024-07-20 18:09:08.852101] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.260 [2024-07-20 18:09:08.861213] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.260 [2024-07-20 18:09:08.861741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.260 [2024-07-20 18:09:08.861773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.260 [2024-07-20 18:09:08.861810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.260 [2024-07-20 18:09:08.862045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.260 [2024-07-20 18:09:08.862301] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.260 [2024-07-20 18:09:08.862326] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.260 [2024-07-20 18:09:08.862343] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.260 [2024-07-20 18:09:08.865896] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.260 [2024-07-20 18:09:08.875122] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.260 [2024-07-20 18:09:08.875621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.260 [2024-07-20 18:09:08.875653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.260 [2024-07-20 18:09:08.875681] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.260 [2024-07-20 18:09:08.875929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.260 [2024-07-20 18:09:08.876174] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.260 [2024-07-20 18:09:08.876198] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.260 [2024-07-20 18:09:08.876215] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.260 [2024-07-20 18:09:08.879773] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.260 [2024-07-20 18:09:08.889028] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.260 [2024-07-20 18:09:08.889604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.260 [2024-07-20 18:09:08.889653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.260 [2024-07-20 18:09:08.889672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.260 [2024-07-20 18:09:08.889920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.260 [2024-07-20 18:09:08.890163] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.260 [2024-07-20 18:09:08.890187] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.260 [2024-07-20 18:09:08.890204] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.260 [2024-07-20 18:09:08.893776] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.260 [2024-07-20 18:09:08.903035] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.260 [2024-07-20 18:09:08.903578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.260 [2024-07-20 18:09:08.903619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.260 [2024-07-20 18:09:08.903643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.260 [2024-07-20 18:09:08.903895] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.260 [2024-07-20 18:09:08.904139] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.260 [2024-07-20 18:09:08.904173] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.260 [2024-07-20 18:09:08.904189] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.260 [2024-07-20 18:09:08.907756] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.260 [2024-07-20 18:09:08.916855] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.260 [2024-07-20 18:09:08.917330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.260 [2024-07-20 18:09:08.917362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.260 [2024-07-20 18:09:08.917380] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.260 [2024-07-20 18:09:08.917619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.260 [2024-07-20 18:09:08.917886] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.260 [2024-07-20 18:09:08.917909] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.260 [2024-07-20 18:09:08.917924] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.260 [2024-07-20 18:09:08.921469] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.260 [2024-07-20 18:09:08.930830] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.260 [2024-07-20 18:09:08.931470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.260 [2024-07-20 18:09:08.931524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.260 [2024-07-20 18:09:08.931542] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.260 [2024-07-20 18:09:08.931772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.260 [2024-07-20 18:09:08.932027] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.260 [2024-07-20 18:09:08.932050] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.260 [2024-07-20 18:09:08.932065] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.260 [2024-07-20 18:09:08.935190] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.260 [2024-07-20 18:09:08.944733] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.260 [2024-07-20 18:09:08.945277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.260 [2024-07-20 18:09:08.945321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.260 [2024-07-20 18:09:08.945341] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.260 [2024-07-20 18:09:08.945580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.260 [2024-07-20 18:09:08.945833] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.260 [2024-07-20 18:09:08.945865] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.260 [2024-07-20 18:09:08.945883] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.260 [2024-07-20 18:09:08.949448] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.260 [2024-07-20 18:09:08.958807] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.260 [2024-07-20 18:09:08.959422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.260 [2024-07-20 18:09:08.959467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.260 [2024-07-20 18:09:08.959495] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.260 [2024-07-20 18:09:08.959740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.260 [2024-07-20 18:09:08.959999] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.260 [2024-07-20 18:09:08.960023] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.260 [2024-07-20 18:09:08.960039] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.260 [2024-07-20 18:09:08.963668] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.260 [2024-07-20 18:09:08.972315] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.260 [2024-07-20 18:09:08.972928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.260 [2024-07-20 18:09:08.972959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.260 [2024-07-20 18:09:08.972976] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.260 [2024-07-20 18:09:08.973231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.260 [2024-07-20 18:09:08.973475] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.261 [2024-07-20 18:09:08.973500] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.261 [2024-07-20 18:09:08.973516] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.261 [2024-07-20 18:09:08.977154] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.261 [2024-07-20 18:09:08.986344] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.261 [2024-07-20 18:09:08.986853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.261 [2024-07-20 18:09:08.986886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.261 [2024-07-20 18:09:08.986905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.261 [2024-07-20 18:09:08.987143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.261 [2024-07-20 18:09:08.987387] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.261 [2024-07-20 18:09:08.987411] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.261 [2024-07-20 18:09:08.987428] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.261 [2024-07-20 18:09:08.990999] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.261 [2024-07-20 18:09:09.000396] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.261 [2024-07-20 18:09:09.000932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.261 [2024-07-20 18:09:09.000963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.261 [2024-07-20 18:09:09.000990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.261 [2024-07-20 18:09:09.001236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.261 [2024-07-20 18:09:09.001481] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.261 [2024-07-20 18:09:09.001505] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.261 [2024-07-20 18:09:09.001522] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.261 [2024-07-20 18:09:09.005149] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.261 [2024-07-20 18:09:09.014311] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.261 [2024-07-20 18:09:09.014836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.261 [2024-07-20 18:09:09.014866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.261 [2024-07-20 18:09:09.014884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.261 [2024-07-20 18:09:09.015134] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.261 [2024-07-20 18:09:09.015378] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.261 [2024-07-20 18:09:09.015403] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.261 [2024-07-20 18:09:09.015421] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.261 [2024-07-20 18:09:09.019045] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.261 [2024-07-20 18:09:09.028229] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.261 [2024-07-20 18:09:09.028775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.261 [2024-07-20 18:09:09.028822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.261 [2024-07-20 18:09:09.028844] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.261 [2024-07-20 18:09:09.029101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.261 [2024-07-20 18:09:09.029351] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.261 [2024-07-20 18:09:09.029376] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.261 [2024-07-20 18:09:09.029393] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.261 [2024-07-20 18:09:09.033004] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.261 [2024-07-20 18:09:09.042178] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.261 [2024-07-20 18:09:09.042697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.261 [2024-07-20 18:09:09.042744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.261 [2024-07-20 18:09:09.042764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.261 [2024-07-20 18:09:09.043016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.261 [2024-07-20 18:09:09.043274] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.261 [2024-07-20 18:09:09.043299] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.261 [2024-07-20 18:09:09.043316] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.261 [2024-07-20 18:09:09.046922] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.520 [2024-07-20 18:09:09.055818] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.520 [2024-07-20 18:09:09.056346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.520 [2024-07-20 18:09:09.056376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.520 [2024-07-20 18:09:09.056392] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.520 [2024-07-20 18:09:09.056607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.520 [2024-07-20 18:09:09.056851] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.520 [2024-07-20 18:09:09.056873] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.520 [2024-07-20 18:09:09.056888] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.520 [2024-07-20 18:09:09.060130] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.520 [2024-07-20 18:09:09.069426] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.520 [2024-07-20 18:09:09.069938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.520 [2024-07-20 18:09:09.069968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.520 [2024-07-20 18:09:09.069985] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.520 [2024-07-20 18:09:09.070219] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.520 [2024-07-20 18:09:09.070413] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.520 [2024-07-20 18:09:09.070433] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.520 [2024-07-20 18:09:09.070446] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.520 [2024-07-20 18:09:09.073479] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.520 [2024-07-20 18:09:09.082713] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.520 [2024-07-20 18:09:09.083268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.520 [2024-07-20 18:09:09.083296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.520 [2024-07-20 18:09:09.083329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.520 [2024-07-20 18:09:09.083535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.520 [2024-07-20 18:09:09.083729] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.520 [2024-07-20 18:09:09.083748] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.520 [2024-07-20 18:09:09.083769] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.520 [2024-07-20 18:09:09.086816] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.520 [2024-07-20 18:09:09.096180] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.520 [2024-07-20 18:09:09.096662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.520 [2024-07-20 18:09:09.096690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.520 [2024-07-20 18:09:09.096707] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.520 [2024-07-20 18:09:09.096941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.520 [2024-07-20 18:09:09.097180] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.520 [2024-07-20 18:09:09.097201] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.520 [2024-07-20 18:09:09.097214] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.520 [2024-07-20 18:09:09.100187] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.520 [2024-07-20 18:09:09.110056] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.520 [2024-07-20 18:09:09.110597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.520 [2024-07-20 18:09:09.110629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.520 [2024-07-20 18:09:09.110647] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.520 [2024-07-20 18:09:09.110905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.520 [2024-07-20 18:09:09.111130] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.520 [2024-07-20 18:09:09.111166] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.520 [2024-07-20 18:09:09.111183] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.520 [2024-07-20 18:09:09.114735] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.520 [2024-07-20 18:09:09.123951] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.520 [2024-07-20 18:09:09.124486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.520 [2024-07-20 18:09:09.124517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.520 [2024-07-20 18:09:09.124535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.520 [2024-07-20 18:09:09.124773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.520 [2024-07-20 18:09:09.125019] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.520 [2024-07-20 18:09:09.125042] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.520 [2024-07-20 18:09:09.125056] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.520 [2024-07-20 18:09:09.128458] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.520 [2024-07-20 18:09:09.137858] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.520 [2024-07-20 18:09:09.138397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.520 [2024-07-20 18:09:09.138428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.520 [2024-07-20 18:09:09.138447] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.520 [2024-07-20 18:09:09.138685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.520 [2024-07-20 18:09:09.138941] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.520 [2024-07-20 18:09:09.138964] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.520 [2024-07-20 18:09:09.138979] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.520 [2024-07-20 18:09:09.142588] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.520 [2024-07-20 18:09:09.151696] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.520 [2024-07-20 18:09:09.152181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.520 [2024-07-20 18:09:09.152213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.520 [2024-07-20 18:09:09.152231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.520 [2024-07-20 18:09:09.152461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.520 [2024-07-20 18:09:09.152696] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.520 [2024-07-20 18:09:09.152720] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.520 [2024-07-20 18:09:09.152735] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.520 [2024-07-20 18:09:09.156172] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.520 [2024-07-20 18:09:09.165068] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.520 [2024-07-20 18:09:09.165544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.520 [2024-07-20 18:09:09.165571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.520 [2024-07-20 18:09:09.165602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.520 [2024-07-20 18:09:09.165850] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.520 [2024-07-20 18:09:09.166055] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.520 [2024-07-20 18:09:09.166075] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.520 [2024-07-20 18:09:09.166089] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.521 [2024-07-20 18:09:09.169106] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.521 [2024-07-20 18:09:09.178969] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.521 [2024-07-20 18:09:09.179488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.521 [2024-07-20 18:09:09.179519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.521 [2024-07-20 18:09:09.179537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.521 [2024-07-20 18:09:09.179775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.521 [2024-07-20 18:09:09.180020] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.521 [2024-07-20 18:09:09.180042] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.521 [2024-07-20 18:09:09.180056] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.521 [2024-07-20 18:09:09.183655] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.521 [2024-07-20 18:09:09.192859] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.521 [2024-07-20 18:09:09.193335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.521 [2024-07-20 18:09:09.193365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.521 [2024-07-20 18:09:09.193382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.521 [2024-07-20 18:09:09.193611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.521 [2024-07-20 18:09:09.193869] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.521 [2024-07-20 18:09:09.193891] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.521 [2024-07-20 18:09:09.193905] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.521 [2024-07-20 18:09:09.197233] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.521 [2024-07-20 18:09:09.206312] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.521 [2024-07-20 18:09:09.206748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.521 [2024-07-20 18:09:09.206790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.521 [2024-07-20 18:09:09.206824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.521 [2024-07-20 18:09:09.207048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.521 [2024-07-20 18:09:09.207253] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.521 [2024-07-20 18:09:09.207273] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.521 [2024-07-20 18:09:09.207286] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.521 [2024-07-20 18:09:09.210264] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.521 [2024-07-20 18:09:09.220171] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.521 [2024-07-20 18:09:09.220676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.521 [2024-07-20 18:09:09.220708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.521 [2024-07-20 18:09:09.220726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.521 [2024-07-20 18:09:09.220990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.521 [2024-07-20 18:09:09.221246] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.521 [2024-07-20 18:09:09.221270] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.521 [2024-07-20 18:09:09.221286] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.521 [2024-07-20 18:09:09.224861] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.521 [2024-07-20 18:09:09.233924] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.521 [2024-07-20 18:09:09.234386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.521 [2024-07-20 18:09:09.234415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.521 [2024-07-20 18:09:09.234432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.521 [2024-07-20 18:09:09.234654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.521 [2024-07-20 18:09:09.234894] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.521 [2024-07-20 18:09:09.234917] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.521 [2024-07-20 18:09:09.234932] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.521 [2024-07-20 18:09:09.238184] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.521 [2024-07-20 18:09:09.247321] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.521 [2024-07-20 18:09:09.247812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.521 [2024-07-20 18:09:09.247840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.521 [2024-07-20 18:09:09.247856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.521 [2024-07-20 18:09:09.248105] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.521 [2024-07-20 18:09:09.248304] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.521 [2024-07-20 18:09:09.248324] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.521 [2024-07-20 18:09:09.248337] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.521 [2024-07-20 18:09:09.251291] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.521 [2024-07-20 18:09:09.261233] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.521 [2024-07-20 18:09:09.261759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.521 [2024-07-20 18:09:09.261790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.521 [2024-07-20 18:09:09.261824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.521 [2024-07-20 18:09:09.262061] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.521 [2024-07-20 18:09:09.262319] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.521 [2024-07-20 18:09:09.262343] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.521 [2024-07-20 18:09:09.262359] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.521 [2024-07-20 18:09:09.265936] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.521 [2024-07-20 18:09:09.274968] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.521 [2024-07-20 18:09:09.275453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.521 [2024-07-20 18:09:09.275481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.521 [2024-07-20 18:09:09.275503] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.521 [2024-07-20 18:09:09.275718] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.521 [2024-07-20 18:09:09.275947] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.521 [2024-07-20 18:09:09.275969] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.521 [2024-07-20 18:09:09.275983] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.521 [2024-07-20 18:09:09.279128] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.521 [2024-07-20 18:09:09.288145] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.521 [2024-07-20 18:09:09.288901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.521 [2024-07-20 18:09:09.288940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.521 [2024-07-20 18:09:09.288972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.521 [2024-07-20 18:09:09.289207] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.521 [2024-07-20 18:09:09.289452] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.521 [2024-07-20 18:09:09.289476] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.521 [2024-07-20 18:09:09.289492] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.521 [2024-07-20 18:09:09.293041] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.521 [2024-07-20 18:09:09.302099] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.521 [2024-07-20 18:09:09.302649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.521 [2024-07-20 18:09:09.302677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.521 [2024-07-20 18:09:09.302694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.521 [2024-07-20 18:09:09.302946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.521 [2024-07-20 18:09:09.303189] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.521 [2024-07-20 18:09:09.303214] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.521 [2024-07-20 18:09:09.303229] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.521 [2024-07-20 18:09:09.306798] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.780 [2024-07-20 18:09:09.316081] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.780 [2024-07-20 18:09:09.316690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.780 [2024-07-20 18:09:09.316744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.780 [2024-07-20 18:09:09.316765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.780 [2024-07-20 18:09:09.317025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.780 [2024-07-20 18:09:09.317274] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.780 [2024-07-20 18:09:09.317305] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.780 [2024-07-20 18:09:09.317322] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.780 [2024-07-20 18:09:09.320898] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.780 [2024-07-20 18:09:09.329983] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.780 [2024-07-20 18:09:09.330515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.780 [2024-07-20 18:09:09.330547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.780 [2024-07-20 18:09:09.330566] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.780 [2024-07-20 18:09:09.330813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.780 [2024-07-20 18:09:09.331057] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.780 [2024-07-20 18:09:09.331080] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.780 [2024-07-20 18:09:09.331096] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.780 [2024-07-20 18:09:09.334657] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.780 [2024-07-20 18:09:09.343700] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.780 [2024-07-20 18:09:09.344190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.780 [2024-07-20 18:09:09.344233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.780 [2024-07-20 18:09:09.344250] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.780 [2024-07-20 18:09:09.344499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.780 [2024-07-20 18:09:09.344725] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.780 [2024-07-20 18:09:09.344747] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.780 [2024-07-20 18:09:09.344761] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.780 [2024-07-20 18:09:09.348097] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.781 [2024-07-20 18:09:09.357051] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.781 [2024-07-20 18:09:09.357812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.781 [2024-07-20 18:09:09.357862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.781 [2024-07-20 18:09:09.357880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.781 [2024-07-20 18:09:09.358096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.781 [2024-07-20 18:09:09.358297] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.781 [2024-07-20 18:09:09.358317] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.781 [2024-07-20 18:09:09.358330] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.781 [2024-07-20 18:09:09.361446] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.781 [2024-07-20 18:09:09.370952] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.781 [2024-07-20 18:09:09.371486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.781 [2024-07-20 18:09:09.371518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.781 [2024-07-20 18:09:09.371537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.781 [2024-07-20 18:09:09.371775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.781 [2024-07-20 18:09:09.372016] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.781 [2024-07-20 18:09:09.372038] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.781 [2024-07-20 18:09:09.372052] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.781 [2024-07-20 18:09:09.375630] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.781 [2024-07-20 18:09:09.384597] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.781 [2024-07-20 18:09:09.385068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.781 [2024-07-20 18:09:09.385097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.781 [2024-07-20 18:09:09.385113] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.781 [2024-07-20 18:09:09.385327] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.781 [2024-07-20 18:09:09.385546] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.781 [2024-07-20 18:09:09.385568] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.781 [2024-07-20 18:09:09.385582] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.781 [2024-07-20 18:09:09.388743] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.781 [2024-07-20 18:09:09.397855] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.781 [2024-07-20 18:09:09.398353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.781 [2024-07-20 18:09:09.398384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.781 [2024-07-20 18:09:09.398402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.781 [2024-07-20 18:09:09.398641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.781 [2024-07-20 18:09:09.398904] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.781 [2024-07-20 18:09:09.398927] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.781 [2024-07-20 18:09:09.398941] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.781 [2024-07-20 18:09:09.402507] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.781 [2024-07-20 18:09:09.411766] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.781 [2024-07-20 18:09:09.412440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.781 [2024-07-20 18:09:09.412495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.781 [2024-07-20 18:09:09.412516] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.781 [2024-07-20 18:09:09.412768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.781 [2024-07-20 18:09:09.413018] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.781 [2024-07-20 18:09:09.413041] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.781 [2024-07-20 18:09:09.413055] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.781 [2024-07-20 18:09:09.416560] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.781 [2024-07-20 18:09:09.425233] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.781 [2024-07-20 18:09:09.425704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.781 [2024-07-20 18:09:09.425747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.781 [2024-07-20 18:09:09.425764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.781 [2024-07-20 18:09:09.425992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.781 [2024-07-20 18:09:09.426198] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.781 [2024-07-20 18:09:09.426219] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.781 [2024-07-20 18:09:09.426232] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.781 [2024-07-20 18:09:09.429354] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.781 [2024-07-20 18:09:09.438963] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.781 [2024-07-20 18:09:09.439474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.781 [2024-07-20 18:09:09.439503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.781 [2024-07-20 18:09:09.439519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.781 [2024-07-20 18:09:09.439771] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.781 [2024-07-20 18:09:09.440015] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.781 [2024-07-20 18:09:09.440036] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.781 [2024-07-20 18:09:09.440051] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.781 [2024-07-20 18:09:09.443641] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.781 [2024-07-20 18:09:09.452899] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.781 [2024-07-20 18:09:09.453412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.781 [2024-07-20 18:09:09.453442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.781 [2024-07-20 18:09:09.453460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.781 [2024-07-20 18:09:09.453689] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.781 [2024-07-20 18:09:09.453939] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.781 [2024-07-20 18:09:09.453961] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.781 [2024-07-20 18:09:09.453981] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.781 [2024-07-20 18:09:09.457396] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.781 [2024-07-20 18:09:09.466259] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.781 [2024-07-20 18:09:09.466739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.781 [2024-07-20 18:09:09.466766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.781 [2024-07-20 18:09:09.466782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.781 [2024-07-20 18:09:09.467019] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.781 [2024-07-20 18:09:09.467245] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.781 [2024-07-20 18:09:09.467266] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.781 [2024-07-20 18:09:09.467279] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.781 [2024-07-20 18:09:09.470288] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.781 [2024-07-20 18:09:09.480263] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.781 [2024-07-20 18:09:09.480769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.781 [2024-07-20 18:09:09.480809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.781 [2024-07-20 18:09:09.480829] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.781 [2024-07-20 18:09:09.481071] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.781 [2024-07-20 18:09:09.481344] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.781 [2024-07-20 18:09:09.481368] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.781 [2024-07-20 18:09:09.481384] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.781 [2024-07-20 18:09:09.484945] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.781 [2024-07-20 18:09:09.494197] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.781 [2024-07-20 18:09:09.494704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.781 [2024-07-20 18:09:09.494746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.781 [2024-07-20 18:09:09.494763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.781 [2024-07-20 18:09:09.494985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.781 [2024-07-20 18:09:09.495242] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.781 [2024-07-20 18:09:09.495267] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.782 [2024-07-20 18:09:09.495283] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.782 [2024-07-20 18:09:09.498831] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.782 [2024-07-20 18:09:09.507987] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.782 [2024-07-20 18:09:09.508496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.782 [2024-07-20 18:09:09.508539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.782 [2024-07-20 18:09:09.508555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.782 [2024-07-20 18:09:09.508787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.782 [2024-07-20 18:09:09.509026] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.782 [2024-07-20 18:09:09.509048] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.782 [2024-07-20 18:09:09.509077] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.782 [2024-07-20 18:09:09.512649] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.782 [2024-07-20 18:09:09.521929] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.782 [2024-07-20 18:09:09.522466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.782 [2024-07-20 18:09:09.522508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.782 [2024-07-20 18:09:09.522525] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.782 [2024-07-20 18:09:09.522781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.782 [2024-07-20 18:09:09.523034] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.782 [2024-07-20 18:09:09.523058] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.782 [2024-07-20 18:09:09.523074] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.782 [2024-07-20 18:09:09.526646] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.782 [2024-07-20 18:09:09.535899] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.782 [2024-07-20 18:09:09.536396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.782 [2024-07-20 18:09:09.536423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.782 [2024-07-20 18:09:09.536439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.782 [2024-07-20 18:09:09.536692] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.782 [2024-07-20 18:09:09.536950] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.782 [2024-07-20 18:09:09.536975] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.782 [2024-07-20 18:09:09.536991] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.782 [2024-07-20 18:09:09.540563] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.782 [2024-07-20 18:09:09.549871] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.782 [2024-07-20 18:09:09.550394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.782 [2024-07-20 18:09:09.550437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.782 [2024-07-20 18:09:09.550453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.782 [2024-07-20 18:09:09.550709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.782 [2024-07-20 18:09:09.550970] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.782 [2024-07-20 18:09:09.550996] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.782 [2024-07-20 18:09:09.551012] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.782 [2024-07-20 18:09:09.554581] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.782 [2024-07-20 18:09:09.563882] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:34.782 [2024-07-20 18:09:09.564383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.782 [2024-07-20 18:09:09.564424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:34.782 [2024-07-20 18:09:09.564440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:34.782 [2024-07-20 18:09:09.564687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:34.782 [2024-07-20 18:09:09.564944] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:34.782 [2024-07-20 18:09:09.564969] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:34.782 [2024-07-20 18:09:09.564986] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:34.782 [2024-07-20 18:09:09.568551] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.040 [2024-07-20 18:09:09.577835] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.040 [2024-07-20 18:09:09.578358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.040 [2024-07-20 18:09:09.578388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.040 [2024-07-20 18:09:09.578406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.040 [2024-07-20 18:09:09.578644] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.041 [2024-07-20 18:09:09.578896] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.041 [2024-07-20 18:09:09.578920] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.041 [2024-07-20 18:09:09.578936] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.041 [2024-07-20 18:09:09.582503] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.041 [2024-07-20 18:09:09.591768] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.041 [2024-07-20 18:09:09.592285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.041 [2024-07-20 18:09:09.592311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.041 [2024-07-20 18:09:09.592326] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.041 [2024-07-20 18:09:09.592539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.041 [2024-07-20 18:09:09.592815] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.041 [2024-07-20 18:09:09.592839] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.041 [2024-07-20 18:09:09.592855] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.041 [2024-07-20 18:09:09.596430] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.041 [2024-07-20 18:09:09.605710] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.041 [2024-07-20 18:09:09.606319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.041 [2024-07-20 18:09:09.606364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.041 [2024-07-20 18:09:09.606384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.041 [2024-07-20 18:09:09.606629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.041 [2024-07-20 18:09:09.606888] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.041 [2024-07-20 18:09:09.606913] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.041 [2024-07-20 18:09:09.606929] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.041 [2024-07-20 18:09:09.610501] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.041 [2024-07-20 18:09:09.619568] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.041 [2024-07-20 18:09:09.620104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.041 [2024-07-20 18:09:09.620133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.041 [2024-07-20 18:09:09.620150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.041 [2024-07-20 18:09:09.620397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.041 [2024-07-20 18:09:09.620641] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.041 [2024-07-20 18:09:09.620665] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.041 [2024-07-20 18:09:09.620681] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.041 [2024-07-20 18:09:09.624261] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.041 [2024-07-20 18:09:09.633535] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.041 [2024-07-20 18:09:09.634047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.041 [2024-07-20 18:09:09.634075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.041 [2024-07-20 18:09:09.634091] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.041 [2024-07-20 18:09:09.634340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.041 [2024-07-20 18:09:09.634582] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.041 [2024-07-20 18:09:09.634606] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.041 [2024-07-20 18:09:09.634622] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.041 [2024-07-20 18:09:09.638201] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.041 [2024-07-20 18:09:09.647481] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.041 [2024-07-20 18:09:09.648006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.041 [2024-07-20 18:09:09.648038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.041 [2024-07-20 18:09:09.648062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.041 [2024-07-20 18:09:09.648301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.041 [2024-07-20 18:09:09.648544] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.041 [2024-07-20 18:09:09.648567] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.041 [2024-07-20 18:09:09.648583] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.041 [2024-07-20 18:09:09.652160] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.041 [2024-07-20 18:09:09.661428] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.041 [2024-07-20 18:09:09.661925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.041 [2024-07-20 18:09:09.661953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.041 [2024-07-20 18:09:09.661968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.041 [2024-07-20 18:09:09.662219] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.041 [2024-07-20 18:09:09.662463] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.041 [2024-07-20 18:09:09.662487] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.041 [2024-07-20 18:09:09.662502] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.041 [2024-07-20 18:09:09.666080] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.041 [2024-07-20 18:09:09.675341] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.041 [2024-07-20 18:09:09.675876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.041 [2024-07-20 18:09:09.675908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.041 [2024-07-20 18:09:09.675926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.041 [2024-07-20 18:09:09.676163] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.041 [2024-07-20 18:09:09.676406] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.041 [2024-07-20 18:09:09.676430] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.041 [2024-07-20 18:09:09.676446] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.041 [2024-07-20 18:09:09.679840] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.041 [2024-07-20 18:09:09.689063] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.041 [2024-07-20 18:09:09.689597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.041 [2024-07-20 18:09:09.689629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.041 [2024-07-20 18:09:09.689647] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.041 [2024-07-20 18:09:09.689897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.041 [2024-07-20 18:09:09.690140] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.041 [2024-07-20 18:09:09.690170] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.041 [2024-07-20 18:09:09.690187] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.041 [2024-07-20 18:09:09.693759] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.041 [2024-07-20 18:09:09.703029] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.041 [2024-07-20 18:09:09.703554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.041 [2024-07-20 18:09:09.703595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.041 [2024-07-20 18:09:09.703611] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.041 [2024-07-20 18:09:09.703877] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.041 [2024-07-20 18:09:09.704122] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.041 [2024-07-20 18:09:09.704146] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.041 [2024-07-20 18:09:09.704161] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.041 [2024-07-20 18:09:09.707722] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.041 [2024-07-20 18:09:09.716973] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.041 [2024-07-20 18:09:09.717498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.041 [2024-07-20 18:09:09.717529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.041 [2024-07-20 18:09:09.717547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.041 [2024-07-20 18:09:09.717784] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.041 [2024-07-20 18:09:09.718039] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.041 [2024-07-20 18:09:09.718063] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.041 [2024-07-20 18:09:09.718079] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.041 [2024-07-20 18:09:09.721645] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.041 [2024-07-20 18:09:09.730925] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.041 [2024-07-20 18:09:09.731455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.041 [2024-07-20 18:09:09.731495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.042 [2024-07-20 18:09:09.731511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.042 [2024-07-20 18:09:09.731760] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.042 [2024-07-20 18:09:09.732000] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.042 [2024-07-20 18:09:09.732020] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.042 [2024-07-20 18:09:09.732034] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.042 [2024-07-20 18:09:09.735588] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.042 [2024-07-20 18:09:09.744863] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.042 [2024-07-20 18:09:09.745363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.042 [2024-07-20 18:09:09.745394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.042 [2024-07-20 18:09:09.745411] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.042 [2024-07-20 18:09:09.745649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.042 [2024-07-20 18:09:09.745905] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.042 [2024-07-20 18:09:09.745929] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.042 [2024-07-20 18:09:09.745946] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.042 [2024-07-20 18:09:09.749508] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.042 [2024-07-20 18:09:09.758768] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.042 [2024-07-20 18:09:09.759338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.042 [2024-07-20 18:09:09.759389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.042 [2024-07-20 18:09:09.759422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.042 [2024-07-20 18:09:09.759668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.042 [2024-07-20 18:09:09.759927] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.042 [2024-07-20 18:09:09.759953] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.042 [2024-07-20 18:09:09.759969] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.042 [2024-07-20 18:09:09.763538] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.042 [2024-07-20 18:09:09.772814] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.042 [2024-07-20 18:09:09.773352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.042 [2024-07-20 18:09:09.773393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.042 [2024-07-20 18:09:09.773409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.042 [2024-07-20 18:09:09.773657] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.042 [2024-07-20 18:09:09.773912] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.042 [2024-07-20 18:09:09.773937] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.042 [2024-07-20 18:09:09.773953] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.042 [2024-07-20 18:09:09.777520] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.042 [2024-07-20 18:09:09.786788] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.042 [2024-07-20 18:09:09.787315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.042 [2024-07-20 18:09:09.787347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.042 [2024-07-20 18:09:09.787366] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.042 [2024-07-20 18:09:09.787614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.042 [2024-07-20 18:09:09.787872] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.042 [2024-07-20 18:09:09.787897] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.042 [2024-07-20 18:09:09.787913] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.042 [2024-07-20 18:09:09.791476] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.042 [2024-07-20 18:09:09.800740] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.042 [2024-07-20 18:09:09.801364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.042 [2024-07-20 18:09:09.801413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.042 [2024-07-20 18:09:09.801432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.042 [2024-07-20 18:09:09.801670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.042 [2024-07-20 18:09:09.801926] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.042 [2024-07-20 18:09:09.801951] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.042 [2024-07-20 18:09:09.801967] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.042 [2024-07-20 18:09:09.805532] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.042 [2024-07-20 18:09:09.814579] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.042 [2024-07-20 18:09:09.815072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.042 [2024-07-20 18:09:09.815114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.042 [2024-07-20 18:09:09.815130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.042 [2024-07-20 18:09:09.815388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.042 [2024-07-20 18:09:09.815632] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.042 [2024-07-20 18:09:09.815655] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.042 [2024-07-20 18:09:09.815671] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.042 [2024-07-20 18:09:09.819249] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.042 [2024-07-20 18:09:09.828509] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.042 [2024-07-20 18:09:09.829209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.042 [2024-07-20 18:09:09.829262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.042 [2024-07-20 18:09:09.829280] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.042 [2024-07-20 18:09:09.829518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.042 [2024-07-20 18:09:09.829761] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.042 [2024-07-20 18:09:09.829785] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.042 [2024-07-20 18:09:09.829819] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.042 [2024-07-20 18:09:09.833374] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.301 [2024-07-20 18:09:09.842436] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.301 [2024-07-20 18:09:09.842960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.301 [2024-07-20 18:09:09.842989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.301 [2024-07-20 18:09:09.843005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.301 [2024-07-20 18:09:09.843254] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.301 [2024-07-20 18:09:09.843498] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.301 [2024-07-20 18:09:09.843522] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.301 [2024-07-20 18:09:09.843538] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.301 [2024-07-20 18:09:09.847121] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.301 [2024-07-20 18:09:09.856387] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.301 [2024-07-20 18:09:09.856890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.301 [2024-07-20 18:09:09.856922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.301 [2024-07-20 18:09:09.856940] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.301 [2024-07-20 18:09:09.857178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.301 [2024-07-20 18:09:09.857421] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.301 [2024-07-20 18:09:09.857445] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.301 [2024-07-20 18:09:09.857460] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.301 [2024-07-20 18:09:09.861040] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.301 [2024-07-20 18:09:09.870319] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.301 [2024-07-20 18:09:09.870853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.301 [2024-07-20 18:09:09.870895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.301 [2024-07-20 18:09:09.870912] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.301 [2024-07-20 18:09:09.871168] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.301 [2024-07-20 18:09:09.871411] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.301 [2024-07-20 18:09:09.871435] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.301 [2024-07-20 18:09:09.871452] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.301 [2024-07-20 18:09:09.875035] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.301 [2024-07-20 18:09:09.884298] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.301 [2024-07-20 18:09:09.884841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.301 [2024-07-20 18:09:09.884887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.301 [2024-07-20 18:09:09.884904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.301 [2024-07-20 18:09:09.885165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.302 [2024-07-20 18:09:09.885408] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.302 [2024-07-20 18:09:09.885432] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.302 [2024-07-20 18:09:09.885447] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.302 [2024-07-20 18:09:09.889028] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.302 [2024-07-20 18:09:09.898308] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.302 [2024-07-20 18:09:09.898835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.302 [2024-07-20 18:09:09.898866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.302 [2024-07-20 18:09:09.898884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.302 [2024-07-20 18:09:09.899122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.302 [2024-07-20 18:09:09.899365] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.302 [2024-07-20 18:09:09.899389] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.302 [2024-07-20 18:09:09.899404] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.302 [2024-07-20 18:09:09.902985] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.302 [2024-07-20 18:09:09.912252] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.302 [2024-07-20 18:09:09.912753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.302 [2024-07-20 18:09:09.912784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.302 [2024-07-20 18:09:09.912813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.302 [2024-07-20 18:09:09.913057] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.302 [2024-07-20 18:09:09.913318] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.302 [2024-07-20 18:09:09.913342] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.302 [2024-07-20 18:09:09.913358] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.302 [2024-07-20 18:09:09.916930] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.302 [2024-07-20 18:09:09.926187] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.302 [2024-07-20 18:09:09.926708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.302 [2024-07-20 18:09:09.926748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.302 [2024-07-20 18:09:09.926764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.302 [2024-07-20 18:09:09.927051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.302 [2024-07-20 18:09:09.927306] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.302 [2024-07-20 18:09:09.927331] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.302 [2024-07-20 18:09:09.927347] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.302 [2024-07-20 18:09:09.930777] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.302 [2024-07-20 18:09:09.939847] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.302 [2024-07-20 18:09:09.940333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.302 [2024-07-20 18:09:09.940364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.302 [2024-07-20 18:09:09.940383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.302 [2024-07-20 18:09:09.940621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.302 [2024-07-20 18:09:09.940875] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.302 [2024-07-20 18:09:09.940900] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.302 [2024-07-20 18:09:09.940916] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.302 [2024-07-20 18:09:09.944481] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.302 [2024-07-20 18:09:09.953741] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.302 [2024-07-20 18:09:09.954361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.302 [2024-07-20 18:09:09.954405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.302 [2024-07-20 18:09:09.954426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.302 [2024-07-20 18:09:09.954671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.302 [2024-07-20 18:09:09.954932] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.302 [2024-07-20 18:09:09.954957] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.302 [2024-07-20 18:09:09.954974] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.302 [2024-07-20 18:09:09.958544] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.302 [2024-07-20 18:09:09.967606] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.302 [2024-07-20 18:09:09.968188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.302 [2024-07-20 18:09:09.968239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.302 [2024-07-20 18:09:09.968258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.302 [2024-07-20 18:09:09.968496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.302 [2024-07-20 18:09:09.968739] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.302 [2024-07-20 18:09:09.968763] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.302 [2024-07-20 18:09:09.968778] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.302 [2024-07-20 18:09:09.972347] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.302 [2024-07-20 18:09:09.981609] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.302 [2024-07-20 18:09:09.982264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.302 [2024-07-20 18:09:09.982315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.302 [2024-07-20 18:09:09.982334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.302 [2024-07-20 18:09:09.982572] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.302 [2024-07-20 18:09:09.982826] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.302 [2024-07-20 18:09:09.982851] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.302 [2024-07-20 18:09:09.982867] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.302 [2024-07-20 18:09:09.986429] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.302 [2024-07-20 18:09:09.995556] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.302 [2024-07-20 18:09:09.996104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.302 [2024-07-20 18:09:09.996136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.302 [2024-07-20 18:09:09.996155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.302 [2024-07-20 18:09:09.996393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.302 [2024-07-20 18:09:09.996635] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.302 [2024-07-20 18:09:09.996659] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.302 [2024-07-20 18:09:09.996675] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.302 [2024-07-20 18:09:10.000253] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.302 [2024-07-20 18:09:10.010353] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.302 [2024-07-20 18:09:10.011269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.302 [2024-07-20 18:09:10.011311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.302 [2024-07-20 18:09:10.011339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.302 [2024-07-20 18:09:10.011599] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.302 [2024-07-20 18:09:10.011894] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.302 [2024-07-20 18:09:10.011920] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.302 [2024-07-20 18:09:10.011938] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.302 [2024-07-20 18:09:10.015509] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.302 [2024-07-20 18:09:10.024354] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.302 [2024-07-20 18:09:10.024897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.302 [2024-07-20 18:09:10.024930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.302 [2024-07-20 18:09:10.024959] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.302 [2024-07-20 18:09:10.025199] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.302 [2024-07-20 18:09:10.025443] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.302 [2024-07-20 18:09:10.025467] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.302 [2024-07-20 18:09:10.025483] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.302 [2024-07-20 18:09:10.028698] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.302 [2024-07-20 18:09:10.038360] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.302 [2024-07-20 18:09:10.038935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.302 [2024-07-20 18:09:10.038969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.302 [2024-07-20 18:09:10.038988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.302 [2024-07-20 18:09:10.039227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.302 [2024-07-20 18:09:10.039471] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.302 [2024-07-20 18:09:10.039495] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.302 [2024-07-20 18:09:10.039510] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.302 [2024-07-20 18:09:10.043085] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.302 [2024-07-20 18:09:10.052183] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.302 [2024-07-20 18:09:10.052652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.302 [2024-07-20 18:09:10.052680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.302 [2024-07-20 18:09:10.052696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.302 [2024-07-20 18:09:10.052932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.302 [2024-07-20 18:09:10.053159] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.302 [2024-07-20 18:09:10.053179] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.302 [2024-07-20 18:09:10.053194] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.302 [2024-07-20 18:09:10.056298] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.302 [2024-07-20 18:09:10.065973] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.302 [2024-07-20 18:09:10.066453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.302 [2024-07-20 18:09:10.066481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.302 [2024-07-20 18:09:10.066498] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.302 [2024-07-20 18:09:10.066747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.302 [2024-07-20 18:09:10.066984] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.302 [2024-07-20 18:09:10.067013] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.302 [2024-07-20 18:09:10.067028] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.302 [2024-07-20 18:09:10.070538] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.303 [2024-07-20 18:09:10.080014] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.303 [2024-07-20 18:09:10.080547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.303 [2024-07-20 18:09:10.080578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.303 [2024-07-20 18:09:10.080596] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.303 [2024-07-20 18:09:10.080847] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.303 [2024-07-20 18:09:10.081091] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.303 [2024-07-20 18:09:10.081115] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.303 [2024-07-20 18:09:10.081130] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.303 [2024-07-20 18:09:10.084695] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.303 [2024-07-20 18:09:10.093965] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.303 [2024-07-20 18:09:10.094468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.303 [2024-07-20 18:09:10.094499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.303 [2024-07-20 18:09:10.094517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.303 [2024-07-20 18:09:10.094755] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.303 [2024-07-20 18:09:10.094996] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.303 [2024-07-20 18:09:10.095017] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.303 [2024-07-20 18:09:10.095031] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.562 [2024-07-20 18:09:10.098607] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.562 [2024-07-20 18:09:10.107878] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.562 [2024-07-20 18:09:10.108389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.562 [2024-07-20 18:09:10.108421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.562 [2024-07-20 18:09:10.108439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.562 [2024-07-20 18:09:10.108678] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.562 [2024-07-20 18:09:10.108932] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.562 [2024-07-20 18:09:10.108957] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.562 [2024-07-20 18:09:10.108973] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.562 [2024-07-20 18:09:10.112542] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.562 [2024-07-20 18:09:10.121822] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.562 [2024-07-20 18:09:10.122351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.562 [2024-07-20 18:09:10.122382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.562 [2024-07-20 18:09:10.122400] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.562 [2024-07-20 18:09:10.122638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.562 [2024-07-20 18:09:10.122894] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.562 [2024-07-20 18:09:10.122919] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.562 [2024-07-20 18:09:10.122935] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.562 [2024-07-20 18:09:10.126504] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.562 [2024-07-20 18:09:10.135806] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.562 [2024-07-20 18:09:10.136332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.562 [2024-07-20 18:09:10.136358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.562 [2024-07-20 18:09:10.136388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.562 [2024-07-20 18:09:10.136616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.562 [2024-07-20 18:09:10.136871] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.562 [2024-07-20 18:09:10.136896] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.562 [2024-07-20 18:09:10.136912] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.562 [2024-07-20 18:09:10.140506] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
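Note on the block of errors above: errno = 111 is ECONNREFUSED. The host keeps retrying the NVMe/TCP connection to 10.0.0.2 port 4420 while nothing is listening there (the target application has been taken down by the test), so each cycle ends with "Resetting controller failed." until the target comes back. The standalone C sketch below reproduces just that socket-level failure; it is illustrative only and is not SPDK code, and apart from the 10.0.0.2:4420 address taken from the log, every detail in it is an assumption.

    /* Minimal sketch: a TCP connect() to a reachable host with no listener on
     * the port fails with ECONNREFUSED (errno 111 on Linux), which is the
     * "connect() failed, errno = 111" pattern seen above. Not SPDK code. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With the address up but no listener, errno is 111 (ECONNREFUSED). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }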
00:33:35.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1099032 Killed "${NVMF_APP[@]}" "$@" 00:33:35.562 18:09:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:35.562 18:09:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:35.562 18:09:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:35.562 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:35.562 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:35.562 18:09:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1100498 00:33:35.562 18:09:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:35.562 18:09:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1100498 00:33:35.562 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1100498 ']' 00:33:35.562 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:35.562 [2024-07-20 18:09:10.149649] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.562 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:35.562 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:35.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:35.562 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:35.562 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:35.562 [2024-07-20 18:09:10.150174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.562 [2024-07-20 18:09:10.150205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.562 [2024-07-20 18:09:10.150223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.562 [2024-07-20 18:09:10.150462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.562 [2024-07-20 18:09:10.150705] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.563 [2024-07-20 18:09:10.150730] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.563 [2024-07-20 18:09:10.150745] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.563 [2024-07-20 18:09:10.154379] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
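The shell trace above is bdevperf.sh restarting the target: the previous nvmf_tgt instance (pid 1099032) was killed, tgt_init and nvmfappstart start a fresh one inside the cvl_0_0_ns_spdk network namespace (pid 1100498), and waitforlisten then waits for the new process to accept connections on the UNIX domain socket /var/tmp/spdk.sock before the test continues. The sketch below shows the general shape of such a wait loop; it is not the actual SPDK test helper (which may probe the RPC socket differently), and the 1 s interval and 30-attempt limit are assumptions.

    /* Sketch: poll a UNIX-domain socket until a server is accepting on it.
     * Socket path taken from the log; retry interval and limit are assumptions. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int wait_for_listen(const char *path, int max_tries)
    {
        struct sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < max_tries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;                /* someone is listening on the socket */
            }
            close(fd);                   /* not there yet (ENOENT/ECONNREFUSED) */
            sleep(1);
        }
        errno = ETIMEDOUT;
        return -1;
    }

    int main(void)
    {
        if (wait_for_listen("/var/tmp/spdk.sock", 30) != 0) {
            perror("wait_for_listen");
            return 1;
        }
        printf("target is listening on /var/tmp/spdk.sock\n");
        return 0;
    }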
00:33:35.563 [2024-07-20 18:09:10.163536] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.563 [2024-07-20 18:09:10.164043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.563 [2024-07-20 18:09:10.164087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.563 [2024-07-20 18:09:10.164106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.563 [2024-07-20 18:09:10.164336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.563 [2024-07-20 18:09:10.164569] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.563 [2024-07-20 18:09:10.164592] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.563 [2024-07-20 18:09:10.164607] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.563 [2024-07-20 18:09:10.168127] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.563 [2024-07-20 18:09:10.177141] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.563 [2024-07-20 18:09:10.177598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.563 [2024-07-20 18:09:10.177627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.563 [2024-07-20 18:09:10.177644] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.563 [2024-07-20 18:09:10.177896] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.563 [2024-07-20 18:09:10.178129] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.563 [2024-07-20 18:09:10.178175] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.563 [2024-07-20 18:09:10.178189] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.563 [2024-07-20 18:09:10.181485] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.563 [2024-07-20 18:09:10.190312] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.563 [2024-07-20 18:09:10.190876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.563 [2024-07-20 18:09:10.190905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.563 [2024-07-20 18:09:10.190921] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.563 [2024-07-20 18:09:10.191179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.563 [2024-07-20 18:09:10.191379] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.563 [2024-07-20 18:09:10.191399] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.563 [2024-07-20 18:09:10.191412] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.563 [2024-07-20 18:09:10.194397] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.563 [2024-07-20 18:09:10.194585] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:35.563 [2024-07-20 18:09:10.194642] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:35.563 [2024-07-20 18:09:10.203630] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.563 [2024-07-20 18:09:10.204215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.563 [2024-07-20 18:09:10.204257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.563 [2024-07-20 18:09:10.204274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.563 [2024-07-20 18:09:10.204507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.563 [2024-07-20 18:09:10.204705] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.563 [2024-07-20 18:09:10.204725] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.563 [2024-07-20 18:09:10.204738] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.563 [2024-07-20 18:09:10.207956] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.563 [2024-07-20 18:09:10.216901] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.563 [2024-07-20 18:09:10.217351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.563 [2024-07-20 18:09:10.217392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.563 [2024-07-20 18:09:10.217409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.563 [2024-07-20 18:09:10.217640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.563 [2024-07-20 18:09:10.217849] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.563 [2024-07-20 18:09:10.217869] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.563 [2024-07-20 18:09:10.217882] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.563 [2024-07-20 18:09:10.220835] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.563 [2024-07-20 18:09:10.230194] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.563 [2024-07-20 18:09:10.230647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.563 [2024-07-20 18:09:10.230674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.563 [2024-07-20 18:09:10.230691] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.563 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.563 [2024-07-20 18:09:10.230917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.563 [2024-07-20 18:09:10.231136] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.563 [2024-07-20 18:09:10.231156] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.563 [2024-07-20 18:09:10.231169] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.563 [2024-07-20 18:09:10.234230] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
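The single EAL notice buried in this entry ("No free 2048 kB hugepages reported on node 1") concerns NUMA node 1 only; the target continues its startup regardless. If one wanted to check the per-node pool the message refers to, the usual sysfs counter is (illustration, not part of the test):

  # free 2 MB hugepages on NUMA node 1, the node the EAL notice is about
  cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages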
00:33:35.563 [2024-07-20 18:09:10.244017] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.563 [2024-07-20 18:09:10.244573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.563 [2024-07-20 18:09:10.244617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.563 [2024-07-20 18:09:10.244637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.563 [2024-07-20 18:09:10.244886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.563 [2024-07-20 18:09:10.245087] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.563 [2024-07-20 18:09:10.245107] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.563 [2024-07-20 18:09:10.245120] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.563 [2024-07-20 18:09:10.248574] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.563 [2024-07-20 18:09:10.257872] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.563 [2024-07-20 18:09:10.258365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.563 [2024-07-20 18:09:10.258398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.563 [2024-07-20 18:09:10.258416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.563 [2024-07-20 18:09:10.258664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.563 [2024-07-20 18:09:10.258885] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.563 [2024-07-20 18:09:10.258907] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.563 [2024-07-20 18:09:10.258921] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.563 [2024-07-20 18:09:10.262414] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.563 [2024-07-20 18:09:10.264526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:35.563 [2024-07-20 18:09:10.271864] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.563 [2024-07-20 18:09:10.272548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.563 [2024-07-20 18:09:10.272585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.563 [2024-07-20 18:09:10.272605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.563 [2024-07-20 18:09:10.272852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.563 [2024-07-20 18:09:10.273055] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.563 [2024-07-20 18:09:10.273082] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.563 [2024-07-20 18:09:10.273107] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.563 [2024-07-20 18:09:10.276572] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.563 [2024-07-20 18:09:10.285657] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.563 [2024-07-20 18:09:10.286390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.563 [2024-07-20 18:09:10.286443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.563 [2024-07-20 18:09:10.286466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.563 [2024-07-20 18:09:10.286734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.563 [2024-07-20 18:09:10.286990] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.563 [2024-07-20 18:09:10.287013] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.563 [2024-07-20 18:09:10.287029] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.563 [2024-07-20 18:09:10.290524] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
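The "Total cores available: 3" notice (and the three reactors that start on cores 1, 2 and 3 a little further down) follows directly from the -m 0xE core mask handed to nvmf_tgt above. A quick check of that mask arithmetic (illustration only):

  # 0xE = binary 1110 -> bits 1, 2 and 3 set, i.e. three reactor cores: 1, 2, 3
  printf 'cores:'; for bit in 0 1 2 3; do (( (0xE >> bit) & 1 )) && printf ' %d' "$bit"; done; echo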
00:33:35.563 [2024-07-20 18:09:10.299603] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.563 [2024-07-20 18:09:10.300111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.563 [2024-07-20 18:09:10.300141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.563 [2024-07-20 18:09:10.300158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.563 [2024-07-20 18:09:10.300389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.563 [2024-07-20 18:09:10.300590] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.563 [2024-07-20 18:09:10.300609] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.563 [2024-07-20 18:09:10.300623] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.563 [2024-07-20 18:09:10.304061] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.563 [2024-07-20 18:09:10.313509] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.563 [2024-07-20 18:09:10.314217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.563 [2024-07-20 18:09:10.314253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.563 [2024-07-20 18:09:10.314273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.563 [2024-07-20 18:09:10.314512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.563 [2024-07-20 18:09:10.314714] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.563 [2024-07-20 18:09:10.314734] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.563 [2024-07-20 18:09:10.314749] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.563 [2024-07-20 18:09:10.318294] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.563 [2024-07-20 18:09:10.327354] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.563 [2024-07-20 18:09:10.328036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.563 [2024-07-20 18:09:10.328091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.563 [2024-07-20 18:09:10.328113] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.563 [2024-07-20 18:09:10.328352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.563 [2024-07-20 18:09:10.328554] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.563 [2024-07-20 18:09:10.328574] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.563 [2024-07-20 18:09:10.328589] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.563 [2024-07-20 18:09:10.332043] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.563 [2024-07-20 18:09:10.341299] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.563 [2024-07-20 18:09:10.341841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.563 [2024-07-20 18:09:10.341874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.563 [2024-07-20 18:09:10.341892] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.563 [2024-07-20 18:09:10.342145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.563 [2024-07-20 18:09:10.342344] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.563 [2024-07-20 18:09:10.342364] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.563 [2024-07-20 18:09:10.342377] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.563 [2024-07-20 18:09:10.345850] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.563 [2024-07-20 18:09:10.355165] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.563 [2024-07-20 18:09:10.355610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.563 [2024-07-20 18:09:10.355640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.563 [2024-07-20 18:09:10.355657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.563 [2024-07-20 18:09:10.355881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.563 [2024-07-20 18:09:10.356131] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.563 [2024-07-20 18:09:10.356151] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.563 [2024-07-20 18:09:10.356145] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:35.563 [2024-07-20 18:09:10.356165] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.563 [2024-07-20 18:09:10.356176] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:35.563 [2024-07-20 18:09:10.356189] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:35.563 [2024-07-20 18:09:10.356200] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:35.564 [2024-07-20 18:09:10.356211] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:35.564 [2024-07-20 18:09:10.356438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:35.564 [2024-07-20 18:09:10.356626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:35.564 [2024-07-20 18:09:10.356633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:35.822 [2024-07-20 18:09:10.359423] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.823 [2024-07-20 18:09:10.368696] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.823 [2024-07-20 18:09:10.369352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.823 [2024-07-20 18:09:10.369393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.823 [2024-07-20 18:09:10.369412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.823 [2024-07-20 18:09:10.369649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.823 [2024-07-20 18:09:10.369875] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.823 [2024-07-20 18:09:10.369897] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.823 [2024-07-20 18:09:10.369913] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
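The app_setup_trace notices interleaved in this entry explain how to pull the nvmf tracepoint data (enabled by the -e 0xFFFF group mask) out of this run; both commands below are lifted straight from those notices:

  spdk_trace -s nvmf -i 0            # attach to the running app and dump a snapshot of events
  cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the raw shm file for offline analysis later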
00:33:35.823 [2024-07-20 18:09:10.373079] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.823 [2024-07-20 18:09:10.382238] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.823 [2024-07-20 18:09:10.382957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.823 [2024-07-20 18:09:10.383014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.823 [2024-07-20 18:09:10.383034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.823 [2024-07-20 18:09:10.383273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.823 [2024-07-20 18:09:10.383505] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.823 [2024-07-20 18:09:10.383526] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.823 [2024-07-20 18:09:10.383542] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.823 [2024-07-20 18:09:10.386667] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.823 [2024-07-20 18:09:10.395842] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.823 [2024-07-20 18:09:10.396524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.823 [2024-07-20 18:09:10.396569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.823 [2024-07-20 18:09:10.396588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.823 [2024-07-20 18:09:10.396834] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.823 [2024-07-20 18:09:10.397051] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.823 [2024-07-20 18:09:10.397073] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.823 [2024-07-20 18:09:10.397089] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.823 [2024-07-20 18:09:10.400248] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.823 [2024-07-20 18:09:10.409376] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.823 [2024-07-20 18:09:10.410127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.823 [2024-07-20 18:09:10.410200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.823 [2024-07-20 18:09:10.410234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.823 [2024-07-20 18:09:10.410455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.823 [2024-07-20 18:09:10.410670] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.823 [2024-07-20 18:09:10.410692] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.823 [2024-07-20 18:09:10.410708] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.823 [2024-07-20 18:09:10.413885] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.823 [2024-07-20 18:09:10.422969] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.823 [2024-07-20 18:09:10.423680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.823 [2024-07-20 18:09:10.423752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.823 [2024-07-20 18:09:10.423772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.823 [2024-07-20 18:09:10.424029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.823 [2024-07-20 18:09:10.424263] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.823 [2024-07-20 18:09:10.424285] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.823 [2024-07-20 18:09:10.424302] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.823 [2024-07-20 18:09:10.427463] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.823 [2024-07-20 18:09:10.436675] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.823 [2024-07-20 18:09:10.437321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.823 [2024-07-20 18:09:10.437364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.823 [2024-07-20 18:09:10.437384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.823 [2024-07-20 18:09:10.437607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.823 [2024-07-20 18:09:10.437858] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.823 [2024-07-20 18:09:10.437880] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.823 [2024-07-20 18:09:10.437898] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.823 [2024-07-20 18:09:10.441179] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.823 [2024-07-20 18:09:10.450315] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.823 [2024-07-20 18:09:10.450799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.823 [2024-07-20 18:09:10.450828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.823 [2024-07-20 18:09:10.450846] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.823 [2024-07-20 18:09:10.451062] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.823 [2024-07-20 18:09:10.451301] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.823 [2024-07-20 18:09:10.451322] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.823 [2024-07-20 18:09:10.451336] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.823 [2024-07-20 18:09:10.454499] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.823 [2024-07-20 18:09:10.463915] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.823 [2024-07-20 18:09:10.464406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.823 [2024-07-20 18:09:10.464435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.823 [2024-07-20 18:09:10.464452] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.823 [2024-07-20 18:09:10.464666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.823 [2024-07-20 18:09:10.464894] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.823 [2024-07-20 18:09:10.464916] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.823 [2024-07-20 18:09:10.464930] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.823 [2024-07-20 18:09:10.468186] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:35.823 [2024-07-20 18:09:10.477434] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.823 [2024-07-20 18:09:10.477906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.823 [2024-07-20 18:09:10.477935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.823 [2024-07-20 18:09:10.477951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.823 [2024-07-20 18:09:10.478178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.823 [2024-07-20 18:09:10.478391] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.823 [2024-07-20 18:09:10.478412] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.823 [2024-07-20 18:09:10.478426] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.823 [2024-07-20 18:09:10.481652] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.823 [2024-07-20 18:09:10.491017] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.823 [2024-07-20 18:09:10.491456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.823 [2024-07-20 18:09:10.491485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.823 [2024-07-20 18:09:10.491501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.823 [2024-07-20 18:09:10.491728] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.823 [2024-07-20 18:09:10.491974] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.823 [2024-07-20 18:09:10.491997] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.823 [2024-07-20 18:09:10.492012] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.823 [2024-07-20 18:09:10.495306] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:35.823 [2024-07-20 18:09:10.503493] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:35.823 [2024-07-20 18:09:10.504605] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.823 [2024-07-20 18:09:10.505098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.823 [2024-07-20 18:09:10.505126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.823 [2024-07-20 18:09:10.505143] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.823 [2024-07-20 18:09:10.505357] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.823 [2024-07-20 18:09:10.505585] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.823 [2024-07-20 18:09:10.505606] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.823 [2024-07-20 18:09:10.505620] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.823 [2024-07-20 18:09:10.508899] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:35.823 [2024-07-20 18:09:10.518238] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.823 [2024-07-20 18:09:10.518751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.823 [2024-07-20 18:09:10.518800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.823 [2024-07-20 18:09:10.518819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.823 [2024-07-20 18:09:10.519033] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.823 [2024-07-20 18:09:10.519284] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.823 [2024-07-20 18:09:10.519304] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.823 [2024-07-20 18:09:10.519318] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.823 [2024-07-20 18:09:10.522525] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.823 [2024-07-20 18:09:10.531710] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.823 [2024-07-20 18:09:10.532226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.823 [2024-07-20 18:09:10.532256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.823 [2024-07-20 18:09:10.532280] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.823 [2024-07-20 18:09:10.532508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.823 [2024-07-20 18:09:10.532721] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.823 [2024-07-20 18:09:10.532742] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.823 [2024-07-20 18:09:10.532757] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.823 [2024-07-20 18:09:10.535985] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:35.823 [2024-07-20 18:09:10.545277] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.823 [2024-07-20 18:09:10.545945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.823 [2024-07-20 18:09:10.545983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.823 [2024-07-20 18:09:10.546002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.823 [2024-07-20 18:09:10.546234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.823 [2024-07-20 18:09:10.546450] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.823 [2024-07-20 18:09:10.546471] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.823 [2024-07-20 18:09:10.546487] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.823 Malloc0 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:35.823 [2024-07-20 18:09:10.549737] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.823 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:35.823 [2024-07-20 18:09:10.558831] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.823 [2024-07-20 18:09:10.559369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:35.824 [2024-07-20 18:09:10.559396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48e70 with addr=10.0.0.2, port=4420 00:33:35.824 [2024-07-20 18:09:10.559413] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48e70 is same with the state(5) to be set 00:33:35.824 [2024-07-20 18:09:10.559627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48e70 (9): Bad file descriptor 00:33:35.824 [2024-07-20 18:09:10.559884] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:35.824 [2024-07-20 18:09:10.559906] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:35.824 [2024-07-20 18:09:10.559921] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
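The rpc_cmd calls threaded through the last few entries are the whole target-side bring-up for this test: create the TCP transport (flags reproduced verbatim from the log), back it with a 64 MB malloc bdev using 512-byte blocks, create subsystem cnode1, attach the namespace, and then, in the entry just below, add the 10.0.0.2:4420 listener. The same sequence issued directly with scripts/rpc.py would look roughly like this (sketch; the test goes through its own rpc_cmd wrapper instead):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420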
00:33:35.824 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.824 18:09:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:35.824 [2024-07-20 18:09:10.563212] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:35.824 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.824 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:35.824 [2024-07-20 18:09:10.566928] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:35.824 18:09:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.824 18:09:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1099318 00:33:35.824 [2024-07-20 18:09:10.572515] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:36.080 [2024-07-20 18:09:10.643920] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:46.047 00:33:46.047 Latency(us) 00:33:46.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.047 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:46.047 Verification LBA range: start 0x0 length 0x4000 00:33:46.047 Nvme1n1 : 15.01 6997.99 27.34 8665.19 0.00 8146.21 1432.08 20680.25 00:33:46.047 =================================================================================================================== 00:33:46.047 Total : 6997.99 27.34 8665.19 0.00 8146.21 1432.08 20680.25 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:46.047 rmmod nvme_tcp 00:33:46.047 rmmod nvme_fabrics 00:33:46.047 rmmod nvme_keyring 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1100498 ']' 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1100498 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 1100498 ']' 
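One quick cross-check of the bdevperf summary embedded above: with the 4096-byte I/O size given in the job line, the MiB/s column is just IOPS scaled by the I/O size, and the logged figures are self-consistent (illustrative arithmetic, not from the test):

  # 6997.99 IOPS * 4096-byte IOs: 6997.99 * 4096 / 1048576 = ~27.3 MiB/s,
  # matching the 27.34 MiB/s reported for Nvme1n1 up to rounding.
  echo 'scale=2; 6997.99 * 4096 / 1048576' | bc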
00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 1100498 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1100498 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1100498' 00:33:46.047 killing process with pid 1100498 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 1100498 00:33:46.047 18:09:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 1100498 00:33:46.047 18:09:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:46.047 18:09:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:46.047 18:09:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:46.047 18:09:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:46.047 18:09:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:46.047 18:09:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.047 18:09:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:46.047 18:09:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.418 18:09:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:47.418 00:33:47.418 real 0m22.232s 00:33:47.418 user 1m0.049s 00:33:47.418 sys 0m4.054s 00:33:47.418 18:09:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:47.418 18:09:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.418 ************************************ 00:33:47.418 END TEST nvmf_bdevperf 00:33:47.418 ************************************ 00:33:47.675 18:09:22 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:47.675 18:09:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:47.675 18:09:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:47.675 18:09:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:47.675 ************************************ 00:33:47.675 START TEST nvmf_target_disconnect 00:33:47.675 ************************************ 00:33:47.675 18:09:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:47.675 * Looking for test storage... 
00:33:47.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:47.675 18:09:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:47.675 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:47.675 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:47.675 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:47.675 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:47.675 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:47.675 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:47.675 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:47.675 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:47.675 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:47.675 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:47.675 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:33:47.676 18:09:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:49.574 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:49.574 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.574 18:09:24 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:49.574 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:49.574 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:49.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:49.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:33:49.574 00:33:49.574 --- 10.0.0.2 ping statistics --- 00:33:49.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:49.574 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:49.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:49.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:33:49.574 00:33:49.574 --- 10.0.0.1 ping statistics --- 00:33:49.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:49.574 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:49.574 ************************************ 00:33:49.574 START TEST nvmf_target_disconnect_tc1 00:33:49.574 ************************************ 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:33:49.574 
18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:49.574 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:49.574 EAL: No free 2048 kB hugepages reported on node 1 00:33:49.574 [2024-07-20 18:09:24.365499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.574 [2024-07-20 18:09:24.365567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240e520 with addr=10.0.0.2, port=4420 00:33:49.574 [2024-07-20 18:09:24.365613] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:49.574 [2024-07-20 18:09:24.365636] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:49.574 [2024-07-20 18:09:24.365652] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:49.574 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:49.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:49.833 Initializing NVMe Controllers 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:49.833 00:33:49.833 real 0m0.091s 00:33:49.833 user 0m0.035s 00:33:49.833 sys 
0m0.055s 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:49.833 ************************************ 00:33:49.833 END TEST nvmf_target_disconnect_tc1 00:33:49.833 ************************************ 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:49.833 ************************************ 00:33:49.833 START TEST nvmf_target_disconnect_tc2 00:33:49.833 ************************************ 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1103598 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1103598 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1103598 ']' 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:49.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:49.833 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:49.833 [2024-07-20 18:09:24.466282] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:33:49.833 [2024-07-20 18:09:24.466354] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:49.833 EAL: No free 2048 kB hugepages reported on node 1 00:33:49.833 [2024-07-20 18:09:24.529837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:49.833 [2024-07-20 18:09:24.616791] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:49.833 [2024-07-20 18:09:24.616850] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:49.833 [2024-07-20 18:09:24.616880] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:49.833 [2024-07-20 18:09:24.616893] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:49.833 [2024-07-20 18:09:24.616903] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:49.833 [2024-07-20 18:09:24.616991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:33:49.833 [2024-07-20 18:09:24.617054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:33:49.833 [2024-07-20 18:09:24.617109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:33:49.833 [2024-07-20 18:09:24.617111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.153 Malloc0 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.153 [2024-07-20 18:09:24.777502] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.153 [2024-07-20 18:09:24.805740] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1103661 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:50.153 18:09:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:50.153 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.057 18:09:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1103598 00:33:52.057 18:09:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 
00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 [2024-07-20 18:09:26.830997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 
starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 [2024-07-20 18:09:26.831313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O 
failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Write completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 [2024-07-20 18:09:26.831620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.057 starting I/O failed 00:33:52.057 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Write completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Write completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Write completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 
Write completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Write completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Write completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Read completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Write completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 Write completed with error (sct=0, sc=8) 00:33:52.058 starting I/O failed 00:33:52.058 [2024-07-20 18:09:26.831928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.058 [2024-07-20 18:09:26.832173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.832223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.832457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.832486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.832869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.832896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.833112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.833137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.833345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.833370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.833854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.833880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.834116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.834141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 
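The two failure patterns above are two sides of the same event: the completions reported with sct=0, sc=8 are outstanding I/Os finished with NVMe generic status 0x08 (Command Aborted due to SQ Deletion), which is how queued requests get completed once their qpair has failed, and the connect() errno = 111 (ECONNREFUSED) errors that follow are the reconnect example retrying 10.0.0.2:4420 after the harness killed the target out from under it. A rough sketch of the tc2 sequence that produces this, pieced together from the commands traced in this log (paths abbreviated, not the literal harness code):

    # Sketch only: approximates the nvmf_target_disconnect_tc2 flow seen in this trace.
    # Assumes the cvl_0_0_ns_spdk namespace and 10.0.0.1/10.0.0.2 addressing set up earlier.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # ... subsystem nqn.2016-06.io.spdk:cnode1 with a Malloc0 namespace and a TCP
    #     listener on 10.0.0.2:4420 is created over RPC before the initiator starts ...
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"   # target gone: in-flight I/O is aborted, further connects are refused
    sleep 2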
00:33:52.058 [2024-07-20 18:09:26.834354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.834379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.834646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.834688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.834963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.834989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.835243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.835268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.835837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.835881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.836100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.836128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.836409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.836437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.836789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.836840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.837061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.837086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.837315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.837354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 
00:33:52.058 [2024-07-20 18:09:26.837608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.837635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.837859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.837886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.838129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.838157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.838460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.838485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.838898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.838924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.839204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.839232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.839714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.839763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.840066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.840091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.840378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.840407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.840938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.840964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 
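Every retry in this stretch fails identically because the refusal happens at the TCP layer, before any NVMe-oF exchange: with the nvmf_tgt process killed, nothing inside the cvl_0_0_ns_spdk namespace is bound to 10.0.0.2:4420 any more. If reproducing this by hand, the listener state can be checked from the same topology with something like the following (illustrative commands, not part of the harness):

    # No output expected while the target is down: nothing listens on the NVMe-oF port.
    ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'
    # Exercise the same path the reconnect example takes from the host side.
    nc -zv -w 1 10.0.0.2 4420 || echo 'refused, as expected while the target is down'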
00:33:52.058 [2024-07-20 18:09:26.841184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.841224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.841471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.841498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.841913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.841939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.842185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.058 [2024-07-20 18:09:26.842209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.058 qpair failed and we were unable to recover it. 00:33:52.058 [2024-07-20 18:09:26.842448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.059 [2024-07-20 18:09:26.842473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.059 qpair failed and we were unable to recover it. 00:33:52.059 [2024-07-20 18:09:26.842709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.059 [2024-07-20 18:09:26.842734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.059 qpair failed and we were unable to recover it. 00:33:52.059 [2024-07-20 18:09:26.842963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.059 [2024-07-20 18:09:26.842989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.059 qpair failed and we were unable to recover it. 00:33:52.059 [2024-07-20 18:09:26.843234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.059 [2024-07-20 18:09:26.843262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.059 qpair failed and we were unable to recover it. 00:33:52.059 [2024-07-20 18:09:26.843565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.059 [2024-07-20 18:09:26.843606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.059 qpair failed and we were unable to recover it. 00:33:52.059 [2024-07-20 18:09:26.843873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.059 [2024-07-20 18:09:26.843899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.059 qpair failed and we were unable to recover it. 
00:33:52.059 [2024-07-20 18:09:26.844113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.059 [2024-07-20 18:09:26.844139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.059 qpair failed and we were unable to recover it. 00:33:52.059 [2024-07-20 18:09:26.844381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.059 [2024-07-20 18:09:26.844423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.059 qpair failed and we were unable to recover it. 00:33:52.059 [2024-07-20 18:09:26.844885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.059 [2024-07-20 18:09:26.844911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.059 qpair failed and we were unable to recover it. 00:33:52.059 [2024-07-20 18:09:26.845130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.059 [2024-07-20 18:09:26.845155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.059 qpair failed and we were unable to recover it. 00:33:52.059 [2024-07-20 18:09:26.845392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.059 [2024-07-20 18:09:26.845431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.059 qpair failed and we were unable to recover it. 00:33:52.059 [2024-07-20 18:09:26.845859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.059 [2024-07-20 18:09:26.845885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.059 qpair failed and we were unable to recover it. 00:33:52.059 [2024-07-20 18:09:26.846151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.059 [2024-07-20 18:09:26.846175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.059 qpair failed and we were unable to recover it. 00:33:52.059 [2024-07-20 18:09:26.846392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.059 [2024-07-20 18:09:26.846418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.059 qpair failed and we were unable to recover it. 00:33:52.059 [2024-07-20 18:09:26.846632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.059 [2024-07-20 18:09:26.846665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.059 qpair failed and we were unable to recover it. 00:33:52.059 [2024-07-20 18:09:26.847086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.059 [2024-07-20 18:09:26.847121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.059 qpair failed and we were unable to recover it. 
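For reference, the target state these connections expect to find was configured over RPC just before the kill; the rpc_cmd calls traced above correspond roughly to the following standalone scripts/rpc.py invocations (a sketch assuming the default /var/tmp/spdk.sock RPC socket):

    # Target-side configuration the reconnect example was started against.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420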
00:33:52.059 [2024-07-20 18:09:26.847365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.059 [2024-07-20 18:09:26.847392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420
00:33:52.059 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for the remaining connection attempts to 10.0.0.2:4420 on tqpair=0x21f7570 (timestamps 18:09:26.847638 through 18:09:26.849049) ...]
00:33:52.059 [2024-07-20 18:09:26.849346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.059 [2024-07-20 18:09:26.849385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420
00:33:52.059 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for the remaining connection attempts to 10.0.0.2:4420 on tqpair=0x7f584c000b90 (timestamps 18:09:26.849683 through 18:09:26.903747) ...]
00:33:52.334 [2024-07-20 18:09:26.904012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.334 [2024-07-20 18:09:26.904038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420
00:33:52.334 qpair failed and we were unable to recover it.
00:33:52.334 [2024-07-20 18:09:26.904285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.334 [2024-07-20 18:09:26.904310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.334 qpair failed and we were unable to recover it. 00:33:52.334 [2024-07-20 18:09:26.904518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.334 [2024-07-20 18:09:26.904543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.904780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.904812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.905051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.905079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.905361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.905387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.905676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.905704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.905973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.906002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.906263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.906289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.906545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.906571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.906919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.906944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 
00:33:52.335 [2024-07-20 18:09:26.907246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.907271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.907538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.907566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.907813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.907842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.908112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.908138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.908525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.908577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.908867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.908893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.909137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.909163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.909386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.909411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.909691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.909724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.910022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.910065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 
00:33:52.335 [2024-07-20 18:09:26.910306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.910335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.910600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.910625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.910895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.910921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.911176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.911204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.911441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.911469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.911735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.911760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.912144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.912182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.912463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.912496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.912761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.912791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.913083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.913109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 
00:33:52.335 [2024-07-20 18:09:26.913469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.913494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.913745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.913772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.914101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.914142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.914411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.914436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.914683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.914709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.914946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.914973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.915231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.915256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.915646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.915704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.915993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.916022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.916297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.916326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 
00:33:52.335 [2024-07-20 18:09:26.916584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.916610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.916880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.335 [2024-07-20 18:09:26.916909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.335 qpair failed and we were unable to recover it. 00:33:52.335 [2024-07-20 18:09:26.917172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.917198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.917421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.917447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.917696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.917726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.917994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.918021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.918253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.918279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.918527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.918552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.918818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.918843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.919102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.919129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 
00:33:52.336 [2024-07-20 18:09:26.919457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.919483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.919763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.919791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.920053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.920080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.920360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.920388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.920650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.920679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.920973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.920999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.921280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.921310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.921563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.921591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.921953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.921987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.922221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.922249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 
00:33:52.336 [2024-07-20 18:09:26.922515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.922543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.922829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.922856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.923078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.923102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.923365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.923394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.923674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.923699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.923977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.924003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.924384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.924433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.924776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.924807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.925018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.925044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.925260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.925286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 
00:33:52.336 [2024-07-20 18:09:26.925492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.925518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.925788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.925823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.926096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.926125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.926384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.926409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.926667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.926692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.926925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.926951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.927189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.927214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.927439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.927464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.927733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.927761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.928050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.928076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 
00:33:52.336 [2024-07-20 18:09:26.928382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.928410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.928706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.928732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.336 [2024-07-20 18:09:26.929023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.336 [2024-07-20 18:09:26.929049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.336 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.929351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.929379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.929643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.929673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.930041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.930070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.930341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.930370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.930638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.930663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.930939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.930966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.931244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.931270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 
00:33:52.337 [2024-07-20 18:09:26.931548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.931576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.931862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.931888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.932149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.932178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.932431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.932456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.932695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.932720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.932996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.933024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.933287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.933315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.933539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.933565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.933781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.933817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.934078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.934106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 
00:33:52.337 [2024-07-20 18:09:26.934404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.934429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.934732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.934762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.935047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.935073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.935382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.935422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.935727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.935752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.936023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.936049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.936282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.936307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.936578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.936606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.936848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.936873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.937159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.937200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 
00:33:52.337 [2024-07-20 18:09:26.937486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.937514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.937774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.937806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.938140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.938165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.938435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.938460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.938866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.938894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.939143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.939168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.939440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.939468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.939729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.939758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.940018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.940044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.940366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.940443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 
00:33:52.337 [2024-07-20 18:09:26.940726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.940754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.941004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.941030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.337 [2024-07-20 18:09:26.941269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.337 [2024-07-20 18:09:26.941295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.337 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.941563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.941591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.941875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.941902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.942189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.942219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.942476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.942506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.942767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.942798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.943163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.943206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.943482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.943513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 
00:33:52.338 [2024-07-20 18:09:26.943848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.943902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.944199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.944224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.944443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.944468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.944724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.944748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.944991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.945019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.945323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.945351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.945616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.945642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.945903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.945933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.946217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.946248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.946534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.946574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 
00:33:52.338 [2024-07-20 18:09:26.946875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.946901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.947186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.947214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.947478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.947504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.947731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.947759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.948030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.948058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.948325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.948350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.948609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.948639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.948929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.948958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.949189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.949215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.949448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.949476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 
00:33:52.338 [2024-07-20 18:09:26.949742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.949772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.950154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.950183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.950450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.950479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.950741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.950769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.951031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.951058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.951306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.951335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.951604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.951633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.338 [2024-07-20 18:09:26.951887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.338 [2024-07-20 18:09:26.951914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.338 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.952129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.952154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.952389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.952428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 
00:33:52.339 [2024-07-20 18:09:26.952669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.952693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.952981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.953009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.953299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.953327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.953617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.953642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.953950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.953975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.954196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.954238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.954504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.954530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.954811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.954839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.955114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.955138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.955513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.955582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 
00:33:52.339 [2024-07-20 18:09:26.955866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.955896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.956162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.956188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.956518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.956557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.956804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.956830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.957060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.957085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.957316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.957341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.957587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.957613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.957925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.957951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.958194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.958224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.958499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.958528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 
00:33:52.339 [2024-07-20 18:09:26.958789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.958824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.959113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.959139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.959408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.959436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.959676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.959703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.959958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.959984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.960279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.960308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.960584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.960608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.960859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.960886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.961180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.961208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.961473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.961503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 
00:33:52.339 [2024-07-20 18:09:26.961779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.961833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.962115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.962144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.962445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.962474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.962737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.962761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.963029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.963055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.963374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.963402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.963816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.963883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.964170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.339 [2024-07-20 18:09:26.964198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.339 qpair failed and we were unable to recover it. 00:33:52.339 [2024-07-20 18:09:26.964449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.964477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.964750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.964775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 
00:33:52.340 [2024-07-20 18:09:26.965070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.965096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.965390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.965419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.965673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.965697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.965977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.966006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.966274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.966299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.966609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.966635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.966912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.966940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.967195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.967219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.967493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.967519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.967773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.967808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 
00:33:52.340 [2024-07-20 18:09:26.968074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.968102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.968362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.968387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.968659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.968684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.968967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.968996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.969271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.969296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.969576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.969601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.969845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.969872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.970108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.970134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.970397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.970430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.970719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.970748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 
00:33:52.340 [2024-07-20 18:09:26.971051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.971077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.971374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.971402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.971671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.971700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.971950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.971976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.972265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.972290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.972562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.972587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.972830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.972855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.973072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.973097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.973384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.973409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.973812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.973861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 
00:33:52.340 [2024-07-20 18:09:26.974154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.974182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.974470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.974499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.974827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.974854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.975115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.975140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.975417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.975442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.975725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.975751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.976044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.976070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.976308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.976333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.976571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.340 [2024-07-20 18:09:26.976597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.340 qpair failed and we were unable to recover it. 00:33:52.340 [2024-07-20 18:09:26.976886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.976911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 
00:33:52.341 [2024-07-20 18:09:26.977192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.977220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.977523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.977549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.977839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.977868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.978157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.978185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.978471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.978497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.978756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.978785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.979067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.979095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.979437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.979477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.979775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.979810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.980092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.980117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 
00:33:52.341 [2024-07-20 18:09:26.980410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.980435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.980689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.980719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.981026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.981052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.981343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.981384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.981651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.981680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.981942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.981971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.982229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.982255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.982527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.982556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.982803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.982837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.983107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.983132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 
00:33:52.341 [2024-07-20 18:09:26.983413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.983439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.983707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.983733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.983987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.984012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.984276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.984303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.984531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.984557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.984803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.984829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.985101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.985130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.985395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.985423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.985710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.985736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.986020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.986047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 
00:33:52.341 [2024-07-20 18:09:26.986347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.986372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.986617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.986643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.986951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.986977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.987243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.987273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.987543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.987569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.987806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.987833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.988055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.988081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.988326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.988352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.988645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.988671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 00:33:52.341 [2024-07-20 18:09:26.988943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.341 [2024-07-20 18:09:26.988972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.341 qpair failed and we were unable to recover it. 
00:33:52.342 [2024-07-20 18:09:26.989226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.989253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.989556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.989582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.989850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.989878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.990135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.990160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.990456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.990481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.990720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.990749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.991011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.991038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.991305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.991331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.991601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.991626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.991881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.991909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 
00:33:52.342 [2024-07-20 18:09:26.992167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.992197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.992464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.992492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.992774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.992805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.993080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.993106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.993311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.993337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.993603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.993629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.993919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.993948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.994213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.994239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.994498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.994528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.994809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.994835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 
00:33:52.342 [2024-07-20 18:09:26.995049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.995091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.995332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.995358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.995632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.995657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.995950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.995979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.996270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.996296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.996565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.996593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.996864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.996890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.997154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.997179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.997484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.997512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.997772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.997821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 
00:33:52.342 [2024-07-20 18:09:26.998118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.998143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.998411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.998439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.998712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.998740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.998999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.999025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.999269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.999298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.999529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.999559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:26.999845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:26.999871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:27.000092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.342 [2024-07-20 18:09:27.000118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.342 qpair failed and we were unable to recover it. 00:33:52.342 [2024-07-20 18:09:27.000336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.000361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.000625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.000651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 
00:33:52.343 [2024-07-20 18:09:27.000938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.000963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.001229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.001257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.001516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.001541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.001822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.001853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.002139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.002165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.002416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.002442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.002657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.002683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.002953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.002982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.003236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.003262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.003569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.003597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 
00:33:52.343 [2024-07-20 18:09:27.003845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.003871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.004111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.004137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.004355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.004381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.004646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.004676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.004973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.004999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.005341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.005369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.005636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.005664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.005927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.005953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.006242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.006272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.006531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.006559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 
00:33:52.343 [2024-07-20 18:09:27.006813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.006839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.007142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.007170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.007400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.007425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.007689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.007715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.007986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.008012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.008292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.008321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.008603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.008628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.008875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.008904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.009194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.009223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.009537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.009561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 
00:33:52.343 [2024-07-20 18:09:27.009790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.009831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.010096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.010124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.010391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.010418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.010694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.010723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.011012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.343 [2024-07-20 18:09:27.011038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.343 qpair failed and we were unable to recover it. 00:33:52.343 [2024-07-20 18:09:27.011295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.011320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.011581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.011610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.011875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.011902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.012214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.012240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.012547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.012572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 
00:33:52.344 [2024-07-20 18:09:27.012880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.012906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.013159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.013182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.013491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.013520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.013777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.013812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.014069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.014095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.014391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.014416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.014706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.014734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.015029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.015055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.015343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.015367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.015633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.015662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 
00:33:52.344 [2024-07-20 18:09:27.015950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.015976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.016244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.016272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.016553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.016580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.016823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.016855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.017120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.017148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.017410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.017439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.017670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.017695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.017931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.017971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.018228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.018261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.018516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.018542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 
00:33:52.344 [2024-07-20 18:09:27.018827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.018856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.019113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.019143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.019421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.019447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.019751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.019779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.020082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.020110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.020364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.020389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.020602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.020643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.020887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.020916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.021152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.021178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.021434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.021462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 
00:33:52.344 [2024-07-20 18:09:27.021727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.021756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.022047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.022073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.022387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.022412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.022723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.022823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.023091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.023116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.344 [2024-07-20 18:09:27.023403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.344 [2024-07-20 18:09:27.023430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.344 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.023709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.023733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.024028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.024054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.024396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.024421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.024686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.024714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 
00:33:52.345 [2024-07-20 18:09:27.025002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.025029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.025329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.025357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.025646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.025675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.025962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.025988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.026231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.026256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.026497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.026523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.026752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.026778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.027027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.027053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.027345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.027370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.027607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.027634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 
00:33:52.345 [2024-07-20 18:09:27.027905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.027934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.028202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.028228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.028485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.028510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.028779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.028819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.029075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.029103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.029329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.029354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.029664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.029693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.029931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.029961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.030224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.030255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.030538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.030564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 
00:33:52.345 [2024-07-20 18:09:27.030873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.030900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.031165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.031191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.031431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.031457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.031700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.031725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.032009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.032052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.032286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.032312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.032522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.032562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.032809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.032835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.033100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.033128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.033426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.033451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 
00:33:52.345 [2024-07-20 18:09:27.033712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.033737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.034001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.034028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.034326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.034352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.034590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.034616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.034890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.034920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.035165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.035194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.345 [2024-07-20 18:09:27.035433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.345 [2024-07-20 18:09:27.035461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.345 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.035747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.035776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.036073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.036099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.036365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.036390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 
00:33:52.346 [2024-07-20 18:09:27.036633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.036658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.036901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.036927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.037150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.037176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.037431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.037459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.037908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.037937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.038176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.038203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.038463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.038492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.038757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.038785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.039099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.039124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.039406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.039432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 
00:33:52.346 [2024-07-20 18:09:27.039673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.039699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.039973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.040000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.040299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.040327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.040587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.040615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.040853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.040879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.041090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.041132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.041376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.041405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.041670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.041696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.041993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.042026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.042261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.042289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 
00:33:52.346 [2024-07-20 18:09:27.042578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.042605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.042886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.042915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.043199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.043227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.043518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.043543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.043822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.043851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.044098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.044123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.044323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.044348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.044580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.044610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.044840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.044870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 00:33:52.346 [2024-07-20 18:09:27.045131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.346 [2024-07-20 18:09:27.045157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.346 qpair failed and we were unable to recover it. 
00:33:52.346 [2024-07-20 18:09:27.045421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.346 [2024-07-20 18:09:27.045453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420
00:33:52.346 qpair failed and we were unable to recover it.
00:33:52.346 - 00:33:52.352 The same three-line failure repeats for every subsequent reconnection attempt between 18:09:27.045 and 18:09:27.108: posix.c:1037:posix_sock_create reports connect() failed with errno = 111, nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420, and each qpair failed and could not be recovered.
00:33:52.352 [2024-07-20 18:09:27.107953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.352 [2024-07-20 18:09:27.107979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420
00:33:52.352 qpair failed and we were unable to recover it.
00:33:52.352 [2024-07-20 18:09:27.108283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.108311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 00:33:52.352 [2024-07-20 18:09:27.108574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.108602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 00:33:52.352 [2024-07-20 18:09:27.108847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.108873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 00:33:52.352 [2024-07-20 18:09:27.109093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.109118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 00:33:52.352 [2024-07-20 18:09:27.109324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.109349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 00:33:52.352 [2024-07-20 18:09:27.109586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.109616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 00:33:52.352 [2024-07-20 18:09:27.109856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.109882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 00:33:52.352 [2024-07-20 18:09:27.110126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.110150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 00:33:52.352 [2024-07-20 18:09:27.110388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.110413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 00:33:52.352 [2024-07-20 18:09:27.110650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.110676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 
00:33:52.352 [2024-07-20 18:09:27.110892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.110918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 00:33:52.352 [2024-07-20 18:09:27.111154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.111179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 00:33:52.352 [2024-07-20 18:09:27.111449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.111478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 00:33:52.352 [2024-07-20 18:09:27.111711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.111740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 00:33:52.352 [2024-07-20 18:09:27.112001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.112028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 00:33:52.352 [2024-07-20 18:09:27.112274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.112300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 00:33:52.352 [2024-07-20 18:09:27.112559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.112588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 00:33:52.352 [2024-07-20 18:09:27.112851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.352 [2024-07-20 18:09:27.112880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.352 qpair failed and we were unable to recover it. 00:33:52.624 [2024-07-20 18:09:27.113121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.113147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.113405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.113432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 
00:33:52.625 [2024-07-20 18:09:27.113672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.113699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.113966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.113996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.114256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.114281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.114510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.114535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.114818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.114860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.115098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.115123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.115357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.115383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.115657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.115682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.115947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.115972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.116187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.116213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 
00:33:52.625 [2024-07-20 18:09:27.116473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.116499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.116727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.116753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.117052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.117081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.117350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.117378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.117640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.117665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.117932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.117958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.118174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.118200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.118449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.118474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.118714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.118739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.119013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.119042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 
00:33:52.625 [2024-07-20 18:09:27.119273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.119302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.119590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.119615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.119899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.119925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.120154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.120183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.120429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.120457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.120924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.120955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.121235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.121261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.121498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.121523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.121827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.121856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.122144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.122173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 
00:33:52.625 [2024-07-20 18:09:27.122438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.122465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.122726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.122752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.122969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.122996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.123238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.123264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.123481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.123507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.123742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.123767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.124014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.124040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.124269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.124295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.124505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.124530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.124780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.124819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 
00:33:52.625 [2024-07-20 18:09:27.125109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.125138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.125413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.125439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.625 qpair failed and we were unable to recover it. 00:33:52.625 [2024-07-20 18:09:27.125687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.625 [2024-07-20 18:09:27.125712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.125948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.125975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.126241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.126269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.126537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.126563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.126826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.126852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.127105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.127130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.127342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.127368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.127638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.127667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 
00:33:52.626 [2024-07-20 18:09:27.127902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.127927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.128164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.128192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.128441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.128467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.128731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.128756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.128980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.129007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.129245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.129271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.129528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.129556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.129822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.129866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.130140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.130166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.130381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.130406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 
00:33:52.626 [2024-07-20 18:09:27.130645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.130670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.130950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.130976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.131214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.131240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.131454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.131480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.131740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.131768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.132026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.132056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.132261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.132287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.132551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.132581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.132850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.132877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.133120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.133145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 
00:33:52.626 [2024-07-20 18:09:27.133378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.133404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.133645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.133673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.133960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.133989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.134252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.134279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.134523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.134549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.134785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.134818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.135058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.135083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.135322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.135349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.135595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.135620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.135865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.135892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 
00:33:52.626 [2024-07-20 18:09:27.136131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.136156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.136402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.136427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.136660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.136686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.136960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.136991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.137252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.137280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.137546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.137571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.137778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.137815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.138088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.138118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.138605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.138656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.138927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.138953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 
00:33:52.626 [2024-07-20 18:09:27.139191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.139216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.139445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.626 [2024-07-20 18:09:27.139470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.626 qpair failed and we were unable to recover it. 00:33:52.626 [2024-07-20 18:09:27.139715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.139741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 00:33:52.627 [2024-07-20 18:09:27.139977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.140004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 00:33:52.627 [2024-07-20 18:09:27.140255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.140281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 00:33:52.627 [2024-07-20 18:09:27.140542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.140568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 00:33:52.627 [2024-07-20 18:09:27.140804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.140846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 00:33:52.627 [2024-07-20 18:09:27.141081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.141110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 00:33:52.627 [2024-07-20 18:09:27.141370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.141395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 00:33:52.627 [2024-07-20 18:09:27.141687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.141715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 
00:33:52.627 [2024-07-20 18:09:27.141985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.142011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 00:33:52.627 [2024-07-20 18:09:27.142258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.142284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 00:33:52.627 [2024-07-20 18:09:27.142498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.142523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 00:33:52.627 [2024-07-20 18:09:27.142806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.142837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 00:33:52.627 [2024-07-20 18:09:27.143105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.143134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 00:33:52.627 [2024-07-20 18:09:27.143362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.143409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 00:33:52.627 [2024-07-20 18:09:27.143633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.143658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 00:33:52.627 [2024-07-20 18:09:27.143911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.143937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 00:33:52.627 [2024-07-20 18:09:27.144228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.144256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 00:33:52.627 [2024-07-20 18:09:27.144521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.627 [2024-07-20 18:09:27.144549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.627 qpair failed and we were unable to recover it. 
00:33:52.627 [2024-07-20 18:09:27.144887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.627 [2024-07-20 18:09:27.144914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420
00:33:52.627 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats with only the timestamps varying, through 2024-07-20 18:09:27.150692 ...]
00:33:52.627 [2024-07-20 18:09:27.150992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.627 [2024-07-20 18:09:27.151032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420
00:33:52.627 qpair failed and we were unable to recover it.
[... the same failure sequence then repeats continuously for tqpair=0x21f7570 with addr=10.0.0.2, port=4420, with only the timestamps varying, through 2024-07-20 18:09:27.204552 ...]
00:33:52.631 [2024-07-20 18:09:27.204774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.631 [2024-07-20 18:09:27.204808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.631 qpair failed and we were unable to recover it. 00:33:52.631 [2024-07-20 18:09:27.205022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.631 [2024-07-20 18:09:27.205048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.631 qpair failed and we were unable to recover it. 00:33:52.631 [2024-07-20 18:09:27.205263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.631 [2024-07-20 18:09:27.205288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.631 qpair failed and we were unable to recover it. 00:33:52.631 [2024-07-20 18:09:27.205526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.631 [2024-07-20 18:09:27.205551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.631 qpair failed and we were unable to recover it. 00:33:52.631 [2024-07-20 18:09:27.205790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.631 [2024-07-20 18:09:27.205823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.631 qpair failed and we were unable to recover it. 00:33:52.631 [2024-07-20 18:09:27.206095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.631 [2024-07-20 18:09:27.206123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.631 qpair failed and we were unable to recover it. 00:33:52.631 [2024-07-20 18:09:27.206383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.631 [2024-07-20 18:09:27.206409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.631 qpair failed and we were unable to recover it. 00:33:52.631 [2024-07-20 18:09:27.206665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.631 [2024-07-20 18:09:27.206693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.631 qpair failed and we were unable to recover it. 00:33:52.631 [2024-07-20 18:09:27.206994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.631 [2024-07-20 18:09:27.207020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.207286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.207311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 
00:33:52.632 [2024-07-20 18:09:27.207578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.207603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.207838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.207866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.208151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.208180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.208467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.208492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.208736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.208762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.209005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.209035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.209298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.209324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.209573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.209598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.209815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.209841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.210054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.210079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 
00:33:52.632 [2024-07-20 18:09:27.210344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.210370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.210606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.210635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.210938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.210964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.211214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.211242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.211508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.211533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.211777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.211811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.212030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.212055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.212324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.212352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.212653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.212682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.212953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.212979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 
00:33:52.632 [2024-07-20 18:09:27.213197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.213223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.213492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.213517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.213762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.213787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.214005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.214031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.214262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.214288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.214532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.214557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.214872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.214898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.215110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.215135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.215374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.215399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.215636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.215662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 
00:33:52.632 [2024-07-20 18:09:27.215879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.215904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.216145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.216173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.216445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.216470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.216775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.216808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.217051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.217076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.217289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.217315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.217534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.632 [2024-07-20 18:09:27.217559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.632 qpair failed and we were unable to recover it. 00:33:52.632 [2024-07-20 18:09:27.217821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.217850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.218141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.218169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.218458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.218484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 
00:33:52.633 [2024-07-20 18:09:27.218905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.218931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.219245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.219275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.219546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.219574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.219837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.219863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.220079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.220105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.220356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.220382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.220663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.220707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.220986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.221017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.221279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.221305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.221574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.221600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 
00:33:52.633 [2024-07-20 18:09:27.221863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.221893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.222157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.222185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.222435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.222461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.222801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.222826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.223061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.223086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.223348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.223376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.223645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.223670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.223963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.223989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.224279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.224304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.224545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.224570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 
00:33:52.633 [2024-07-20 18:09:27.224813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.224839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.225050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.225075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.225358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.225386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.225649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.225697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.225966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.225992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.226256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.226282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.226786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.226842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.227099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.227142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.227435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.227460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.227667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.227694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 
00:33:52.633 [2024-07-20 18:09:27.227980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.228006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.228293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.228321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.228582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.228608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.228863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.228889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.229127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.229153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.229444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.229472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.229735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.229764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.230059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.230085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.230323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.230348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.230588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.230613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 
00:33:52.633 [2024-07-20 18:09:27.230848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.230875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.231138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.231166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.231682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.231708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.231944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.231971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.232212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.232238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.232504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.633 [2024-07-20 18:09:27.232529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.633 qpair failed and we were unable to recover it. 00:33:52.633 [2024-07-20 18:09:27.232740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.232769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.233028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.233053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.233316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.233341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.233588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.233613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 
00:33:52.634 [2024-07-20 18:09:27.233891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.233917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.234157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.234185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.234441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.234467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.234707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.234733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.234983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.235009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.235271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.235299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.235587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.235612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.235898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.235924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.236196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.236221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.236431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.236458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 
00:33:52.634 [2024-07-20 18:09:27.236702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.236728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.236971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.236997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.237241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.237267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.237483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.237509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.237727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.237754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.237965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.237991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.238221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.238250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.238508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.238537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.238805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.238831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.239050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.239076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 
00:33:52.634 [2024-07-20 18:09:27.239338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.239363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.239565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.239607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.239850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.239878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.240121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.240147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.240374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.240399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.240689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.240717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.240980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.241006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.241283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.241308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.241572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.241597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.241813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.241840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 
00:33:52.634 [2024-07-20 18:09:27.242086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.242111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.242383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.242412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.242681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.242723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.242969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.242995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.243234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.243260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.243524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.243549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.243774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.243813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.244031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.244057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.244292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.244317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.244561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.244587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 
00:33:52.634 [2024-07-20 18:09:27.244831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.244857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.245128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.245156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.245386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.245412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.245649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.245674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.245920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.245949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.246239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.246267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.634 qpair failed and we were unable to recover it. 00:33:52.634 [2024-07-20 18:09:27.246551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.634 [2024-07-20 18:09:27.246576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.246790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.246825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.247042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.247068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.247348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.247377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 
00:33:52.635 [2024-07-20 18:09:27.247632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.247673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.247962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.247988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.248268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.248295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.248529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.248556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.248759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.248785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.249054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.249083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.249368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.249396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.249695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.249720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.249995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.250021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.250284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.250312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 
00:33:52.635 [2024-07-20 18:09:27.250629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.250695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.250988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.251014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.251235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.251260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.251509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.251535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.251771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.251808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.252056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.252084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.252367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.252393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.252684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.252709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.252927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.252953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.253235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.253264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 
00:33:52.635 [2024-07-20 18:09:27.253499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.253524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.253766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.253805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.254096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.254121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.254392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.254417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.254649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.254675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.254926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.254955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.255228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.255262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.255560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.255586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.255894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.255929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.256229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.256257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 
00:33:52.635 [2024-07-20 18:09:27.256516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.256546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.256809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.256837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.257094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.257134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.257385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.257411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.257664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.257692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.257943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.257973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.258214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.258240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.635 qpair failed and we were unable to recover it. 00:33:52.635 [2024-07-20 18:09:27.258485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.635 [2024-07-20 18:09:27.258511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.258744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.258769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.259019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.259045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 
00:33:52.636 [2024-07-20 18:09:27.259253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.259293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.259599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.259627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.259898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.259924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.260203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.260229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.260442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.260467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.260718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.260746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.261031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.261060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.261328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.261357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.261607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.261632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.261871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.261897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 
00:33:52.636 [2024-07-20 18:09:27.262135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.262160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.262483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.262559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.262851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.262877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.263148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.263173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.263407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.263434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.263714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.263744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.264012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.264038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.264249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.264276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.264542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.264570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.264899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.264925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 
00:33:52.636 [2024-07-20 18:09:27.265167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.265192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.265421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.265447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.265675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.265700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.265972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.266002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.266260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.266285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.266547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.266575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.266842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.266872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.267107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.267133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.267351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.267391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.267633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.267657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 
00:33:52.636 [2024-07-20 18:09:27.267948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.267974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.268230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.268258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.268523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.268548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.268801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.268845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.269099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.269128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.269394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.269423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.269694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.269718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.269964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.269991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.270255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.270281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.270518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.270546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 
00:33:52.636 [2024-07-20 18:09:27.270782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.270815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.271029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.271056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.271368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.271433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.271680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.271705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.271910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.271935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.272198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.272225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.272685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.272709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.272976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.273002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.636 qpair failed and we were unable to recover it. 00:33:52.636 [2024-07-20 18:09:27.273218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.636 [2024-07-20 18:09:27.273244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.273533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.273562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 
00:33:52.637 [2024-07-20 18:09:27.273867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.273893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.274182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.274207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.274444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.274470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.274713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.274738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.275024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.275053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.275342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.275371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.275662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.275687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.275903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.275928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.276144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.276169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.276413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.276442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 
00:33:52.637 [2024-07-20 18:09:27.276768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.276799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.277081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.277125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.277365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.277390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.277602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.277627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.277859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.277885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.278134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.278163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.278449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.278484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.278806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.278833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.279068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.279093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.279305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.279347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 
00:33:52.637 [2024-07-20 18:09:27.279685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.279778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.280096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.280124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.280410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.280436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.280715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.280742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.281001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.281027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.281297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.281322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.281676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.281734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.281984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.282011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.282249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.282275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.282542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.282567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 
00:33:52.637 [2024-07-20 18:09:27.282810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.282836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.283104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.283132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.283396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.283422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.283688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.283713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.283962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.283988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.284261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.284289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.284520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.284548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.284825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.284851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.285113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.285138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.285342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.285367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 
00:33:52.637 [2024-07-20 18:09:27.285581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.285606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.285841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.285867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.286093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.286119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.286359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.286385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.286615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.286640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.286897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.286927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.637 [2024-07-20 18:09:27.287191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.637 [2024-07-20 18:09:27.287216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.637 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.287513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.287541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.287834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.287860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.288105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.288130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 
00:33:52.638 [2024-07-20 18:09:27.288377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.288402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.288711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.288739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.289024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.289050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.289324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.289351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.289681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.289739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.290006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.290032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.290293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.290326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.290584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.290613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.290850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.290876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.291093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.291119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 
00:33:52.638 [2024-07-20 18:09:27.291371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.291400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.291661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.291689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.291990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.292016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.292298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.292323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.292557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.292584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.292813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.292844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.293126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.293152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.293433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.293461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.293747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.293772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 00:33:52.638 [2024-07-20 18:09:27.294038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.638 [2024-07-20 18:09:27.294064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.638 qpair failed and we were unable to recover it. 
00:33:52.642 [2024-07-20 18:09:27.352311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.642 [2024-07-20 18:09:27.352341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.642 qpair failed and we were unable to recover it. 00:33:52.642 [2024-07-20 18:09:27.352601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.642 [2024-07-20 18:09:27.352627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.642 qpair failed and we were unable to recover it. 00:33:52.642 [2024-07-20 18:09:27.352979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.642 [2024-07-20 18:09:27.353008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.642 qpair failed and we were unable to recover it. 00:33:52.642 [2024-07-20 18:09:27.353310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.642 [2024-07-20 18:09:27.353335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.642 qpair failed and we were unable to recover it. 00:33:52.642 [2024-07-20 18:09:27.353621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.642 [2024-07-20 18:09:27.353651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.642 qpair failed and we were unable to recover it. 00:33:52.642 [2024-07-20 18:09:27.353922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.642 [2024-07-20 18:09:27.353948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.642 qpair failed and we were unable to recover it. 00:33:52.642 [2024-07-20 18:09:27.354215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.642 [2024-07-20 18:09:27.354243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.642 qpair failed and we were unable to recover it. 00:33:52.642 [2024-07-20 18:09:27.354572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.642 [2024-07-20 18:09:27.354624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.642 qpair failed and we were unable to recover it. 00:33:52.642 [2024-07-20 18:09:27.354896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.642 [2024-07-20 18:09:27.354931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.642 qpair failed and we were unable to recover it. 00:33:52.642 [2024-07-20 18:09:27.355195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.642 [2024-07-20 18:09:27.355222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.642 qpair failed and we were unable to recover it. 
00:33:52.642 [2024-07-20 18:09:27.355702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.642 [2024-07-20 18:09:27.355754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.642 qpair failed and we were unable to recover it. 00:33:52.642 [2024-07-20 18:09:27.356039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.642 [2024-07-20 18:09:27.356065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.642 qpair failed and we were unable to recover it. 00:33:52.642 [2024-07-20 18:09:27.356379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.356405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.356698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.356738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.357040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.357075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.357380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.357408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.357696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.357725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.358016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.358042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.358327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.358366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.358626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.358654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 
00:33:52.643 [2024-07-20 18:09:27.358945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.358971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.359220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.359245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.359749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.359779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.360100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.360143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.360439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.360468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.360747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.360772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.361242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.361309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.361615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.361647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.361912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.361942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.362202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.362229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 
00:33:52.643 [2024-07-20 18:09:27.362532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.362560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.362831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.362867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.363136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.363164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.363464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.363489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.363907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.363936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.364226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.364254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.364516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.364544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.364806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.364840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.365094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.365124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.365349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.365378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 
00:33:52.643 [2024-07-20 18:09:27.365653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.365682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.365962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.365989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.366288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.366328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.366592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.366620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.366891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.366920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.367193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.367219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.367549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.367590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.367821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.367851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.368133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.368161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 00:33:52.643 [2024-07-20 18:09:27.368427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.643 [2024-07-20 18:09:27.368451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.643 qpair failed and we were unable to recover it. 
00:33:52.643 [2024-07-20 18:09:27.368809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.368851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.369094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.369119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.369354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.369381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.369699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.369725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.370010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.370039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.370331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.370360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.370622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.370651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.370895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.370920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.371391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.371420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.371887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.371916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 
00:33:52.644 [2024-07-20 18:09:27.372181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.372209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.372472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.372499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.372903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.372933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.373227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.373255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.373510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.373540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.373913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.373942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.374206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.374235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.374512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.374541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.374810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.374839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.375080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.375107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 
00:33:52.644 [2024-07-20 18:09:27.375418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.375447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.375921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.375950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.376212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.376241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.376490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.376515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.376831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.376860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.377122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.377150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.377403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.377431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.377691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.377716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.377966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.377996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.378259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.378287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 
00:33:52.644 [2024-07-20 18:09:27.378561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.378589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.378836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.378864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.379122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.379151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.379441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.379469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.379731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.379758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.380057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.380084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.380384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.380409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.380622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.380663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.380949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.380978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.381212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.381238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 
00:33:52.644 [2024-07-20 18:09:27.381515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.381545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.381821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.381851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.382136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.382165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.382437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.382467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.382910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.382939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.383204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.383233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.383519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.383548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.383820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.383847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.384121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.384150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 00:33:52.644 [2024-07-20 18:09:27.384391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.384420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.644 qpair failed and we were unable to recover it. 
00:33:52.644 [2024-07-20 18:09:27.384684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.644 [2024-07-20 18:09:27.384714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.385002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.385045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.385281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.385309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.385577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.385605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.385901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.385930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.386221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.386247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.386489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.386517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.386806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.386835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.387094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.387123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.387431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.387456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 
00:33:52.645 [2024-07-20 18:09:27.387701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.387730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.387966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.387995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.388261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.388289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.388527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.388552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.388805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.388833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.389117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.389143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.389439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.389467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.389723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.389748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.390027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.390053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.390352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.390377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 
00:33:52.645 [2024-07-20 18:09:27.390599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.390625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.390842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.390868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.391089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.391117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.391358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.391385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.391901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.391927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.392204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.392229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.392781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.392838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.393129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.393155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.393360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.393385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.393622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.393649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 
00:33:52.645 [2024-07-20 18:09:27.393911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.393942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.394226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.394256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.394544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.394573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.394835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.394866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.395104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.395130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.395376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.395404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.395686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.395715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.396005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.396031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.396329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.396358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.396678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.396706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 
00:33:52.645 [2024-07-20 18:09:27.396995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.397024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.397317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.397343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.397627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.397652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.397985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.398014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.398271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.398300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.398670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.398729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.399000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.399029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.399299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.399329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.399625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.645 [2024-07-20 18:09:27.399654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.645 qpair failed and we were unable to recover it. 00:33:52.645 [2024-07-20 18:09:27.399920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.399946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 
00:33:52.646 [2024-07-20 18:09:27.400288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.400353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.400626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.400651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.400930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.400959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.401193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.401219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.401463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.401488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.401725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.401753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.402038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.402067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.402298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.402323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.402564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.402592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.402865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.402894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 
00:33:52.646 [2024-07-20 18:09:27.403170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.403199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.403547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.403572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.403876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.403905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.404183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.404211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.404478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.404506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.404747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.404772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.405022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.405048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.405318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.405347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.405583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.405613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.646 [2024-07-20 18:09:27.405870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.405896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 
00:33:52.646 [2024-07-20 18:09:27.406179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.646 [2024-07-20 18:09:27.406208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.646 qpair failed and we were unable to recover it. 00:33:52.917 [2024-07-20 18:09:27.406478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.917 [2024-07-20 18:09:27.406506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.917 qpair failed and we were unable to recover it. 00:33:52.917 [2024-07-20 18:09:27.406770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.917 [2024-07-20 18:09:27.406807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.917 qpair failed and we were unable to recover it. 00:33:52.917 [2024-07-20 18:09:27.407077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.917 [2024-07-20 18:09:27.407106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.917 qpair failed and we were unable to recover it. 00:33:52.917 [2024-07-20 18:09:27.407347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.917 [2024-07-20 18:09:27.407376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.917 qpair failed and we were unable to recover it. 00:33:52.917 [2024-07-20 18:09:27.407633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.917 [2024-07-20 18:09:27.407661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.917 qpair failed and we were unable to recover it. 00:33:52.917 [2024-07-20 18:09:27.407941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.917 [2024-07-20 18:09:27.407967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.917 qpair failed and we were unable to recover it. 00:33:52.917 [2024-07-20 18:09:27.408172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.917 [2024-07-20 18:09:27.408198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.917 qpair failed and we were unable to recover it. 00:33:52.917 [2024-07-20 18:09:27.408473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.917 [2024-07-20 18:09:27.408501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.917 qpair failed and we were unable to recover it. 00:33:52.917 [2024-07-20 18:09:27.408735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.917 [2024-07-20 18:09:27.408764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.917 qpair failed and we were unable to recover it. 
00:33:52.917 [2024-07-20 18:09:27.409063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.917 [2024-07-20 18:09:27.409089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.917 qpair failed and we were unable to recover it. 00:33:52.917 [2024-07-20 18:09:27.409365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.917 [2024-07-20 18:09:27.409390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.917 qpair failed and we were unable to recover it. 00:33:52.917 [2024-07-20 18:09:27.409629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.917 [2024-07-20 18:09:27.409659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.917 qpair failed and we were unable to recover it. 00:33:52.917 [2024-07-20 18:09:27.409914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.917 [2024-07-20 18:09:27.409943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.917 qpair failed and we were unable to recover it. 00:33:52.917 [2024-07-20 18:09:27.410185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.917 [2024-07-20 18:09:27.410213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.917 qpair failed and we were unable to recover it. 00:33:52.917 [2024-07-20 18:09:27.410478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.917 [2024-07-20 18:09:27.410505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.917 qpair failed and we were unable to recover it. 00:33:52.917 [2024-07-20 18:09:27.410851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.917 [2024-07-20 18:09:27.410877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.917 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.411121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.411147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.411421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.411448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.411871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.411897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 
00:33:52.918 [2024-07-20 18:09:27.412135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.412161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.412407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.412436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.412729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.412782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.413056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.413081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.413375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.413401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.413652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.413680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.413946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.413972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.414186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.414213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.414475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.414504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.414778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.414813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 
00:33:52.918 [2024-07-20 18:09:27.415129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.415158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.415421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.415447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.415726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.415754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.416052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.416083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.416321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.416349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.416840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.416866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.417089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.417114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.417323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.417348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.417649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.417678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.417912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.417938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 
00:33:52.918 [2024-07-20 18:09:27.418192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.418221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.418517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.418546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.418809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.418839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.419113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.419143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.419419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.419448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.419884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.419910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.420172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.420200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.420433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.420458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.420727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.420755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.421038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.421065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 
00:33:52.918 [2024-07-20 18:09:27.421365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.421392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.421687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.421712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.422013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.422043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.422309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.422338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.422600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.422628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.422956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.422982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.423284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.423312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.423610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.423638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.423908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.423933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.424176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.424201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 
00:33:52.918 [2024-07-20 18:09:27.424465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.424495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.918 [2024-07-20 18:09:27.424820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.918 [2024-07-20 18:09:27.424888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.918 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.425119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.425145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.425381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.425407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.425708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.425736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.426038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.426068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.426331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.426360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.426708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.426734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.427036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.427062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.427385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.427413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 
00:33:52.919 [2024-07-20 18:09:27.427706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.427734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.427999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.428027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.428309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.428337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.428627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.428654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.428924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.428950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.429165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.429191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.429436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.429464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.429721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.429751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.430025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.430054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.430314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.430340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 
00:33:52.919 [2024-07-20 18:09:27.430601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.430629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.430893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.430922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.431155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.431184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.431470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.431500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.431811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.431859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.432150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.432175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.432457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.432486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.432913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.432938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.433163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.433189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.433432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.433457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 
00:33:52.919 [2024-07-20 18:09:27.433671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.433696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.433977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.434004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.434396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.434453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.434695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.434722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.434983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.435012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.435307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.435333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.435640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.435669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.435912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.435939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.436212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.436241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.436559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.436606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 
00:33:52.919 [2024-07-20 18:09:27.436883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.436909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.437333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.437393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.437679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.437706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.438047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.438077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.438352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.438383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.438620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.438649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.438888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.438918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.439194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.439218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.439610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.439664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 00:33:52.919 [2024-07-20 18:09:27.439958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.919 [2024-07-20 18:09:27.439984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.919 qpair failed and we were unable to recover it. 
00:33:52.919 [2024-07-20 18:09:27.440269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.440295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.440765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.440823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.441089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.441117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.441411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.441436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.441686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.441715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.441997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.442024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.442301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.442330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.442573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.442602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.442866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.442896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.443145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.443171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 
00:33:52.920 [2024-07-20 18:09:27.443408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.443437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.443704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.443732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.443976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.444002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.444240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.444271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.444576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.444604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.444862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.444888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.445145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.445173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.445432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.445457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.445814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.445843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 00:33:52.920 [2024-07-20 18:09:27.446072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.920 [2024-07-20 18:09:27.446101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.920 qpair failed and we were unable to recover it. 
00:33:52.920 [2024-07-20 18:09:27.446372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.920 [2024-07-20 18:09:27.446400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420
00:33:52.920 qpair failed and we were unable to recover it.
00:33:52.920 [... the same three-line error pattern (connect() failed, errno = 111; sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 18:09:27.446372 through 18:09:27.512340; the intervening duplicate entries are omitted ...]
00:33:52.925 [2024-07-20 18:09:27.512315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.925 [2024-07-20 18:09:27.512340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420
00:33:52.925 qpair failed and we were unable to recover it.
00:33:52.925 [2024-07-20 18:09:27.512608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.512636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.512903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.512933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.513176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.513204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.513492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.513517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.513761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.513789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.514060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.514086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.514349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.514377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.514631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.514656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.514926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.514953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.515195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.515220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 
00:33:52.925 [2024-07-20 18:09:27.515489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.515518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.515776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.515814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.516069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.516112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.516369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.516398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.516662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.516690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.516997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.517024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.517275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.517304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.517604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.517633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.517924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.517953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.518241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.518267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 
00:33:52.925 [2024-07-20 18:09:27.518556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.518585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.518854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.518880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.519144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.519172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.519424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.519453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.519734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.519763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.520031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.520058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.520329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.520357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.520624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.520650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.520919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.520948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.521268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.521336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 
00:33:52.925 [2024-07-20 18:09:27.521634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.521663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.521944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.521969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.522256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.522284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.522572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.522600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.522857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.522886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.523163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.523189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.523470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.523498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.523768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.523802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.524043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.524068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.524350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.524375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 
00:33:52.925 [2024-07-20 18:09:27.524683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.524711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.524984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.525009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.525290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.525319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.525635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.525676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.525962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.525988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.526257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.526287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.526571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.925 [2024-07-20 18:09:27.526600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.925 qpair failed and we were unable to recover it. 00:33:52.925 [2024-07-20 18:09:27.526882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.526908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.527127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.527152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.527561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.527610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 
00:33:52.926 [2024-07-20 18:09:27.527901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.527928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.528208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.528233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.528479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.528504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.528901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.528930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.529195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.529224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.529478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.529503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.529748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.529773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.530106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.530135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.530396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.530421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.530820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.530884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 
00:33:52.926 [2024-07-20 18:09:27.531166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.531194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.531485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.531527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.531817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.531859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.532156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.532195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.532497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.532525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.532812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.532856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.533185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.533214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.533522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.533562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.533851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.533879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.534146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.534173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 
00:33:52.926 [2024-07-20 18:09:27.534441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.534470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.534896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.534921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.535164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.535190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.535475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.535503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.535772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.535809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.536065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.536091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.536329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.536356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.536630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.536660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.536952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.536978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.537361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.537408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 
00:33:52.926 [2024-07-20 18:09:27.537693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.537721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.537952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.537982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.538246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.538274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.538660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.538721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.538985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.539013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.539304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.539332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.539581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.539609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.539918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.539958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.540216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.540245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.540511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.540540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 
00:33:52.926 [2024-07-20 18:09:27.540855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.540885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.541163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.541189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.541470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.541497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.541820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.541854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.542147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.542176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.542469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.542494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.542755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.542783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.543178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.543235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.543516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.543544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.543867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.543894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 
00:33:52.926 [2024-07-20 18:09:27.544160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.926 [2024-07-20 18:09:27.544185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.926 qpair failed and we were unable to recover it. 00:33:52.926 [2024-07-20 18:09:27.544454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.544483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.544726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.544755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.545033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.545060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.545365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.545394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.545630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.545659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.545898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.545928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.546278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.546306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.546586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.546614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.546923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.546952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 
00:33:52.927 [2024-07-20 18:09:27.547202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.547232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.547594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.547619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.547921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.547950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.548214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.548242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.548504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.548532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.548798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.548824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.549094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.549124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.549365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.549394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.549661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.549689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.549958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.549983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 
00:33:52.927 [2024-07-20 18:09:27.550240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.550268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.550523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.550548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.550855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.550884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.551122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.551147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.551419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.551447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.551886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.551915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.552157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.552186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.552454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.552478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.552922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.552952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 00:33:52.927 [2024-07-20 18:09:27.553214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.927 [2024-07-20 18:09:27.553242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.927 qpair failed and we were unable to recover it. 
00:33:52.927 [2024-07-20 18:09:27.553483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.927 [2024-07-20 18:09:27.553516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420
00:33:52.927 qpair failed and we were unable to recover it.
00:33:52.927 [... the same three-line error (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, with only the timestamps advancing, from 18:09:27.553 through 18:09:27.617 ...]
00:33:52.932 [2024-07-20 18:09:27.617531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:52.932 [2024-07-20 18:09:27.617572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420
00:33:52.932 qpair failed and we were unable to recover it.
00:33:52.932 [2024-07-20 18:09:27.617828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.617857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.618112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.618137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.618348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.618375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.618581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.618608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.618901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.618927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.619165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.619191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.619411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.619440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.619675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.619702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.619960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.619990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.620237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.620262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 
00:33:52.932 [2024-07-20 18:09:27.620500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.620526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.620765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.620789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.621074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.621099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.621368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.621396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.621658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.621700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.621953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.621980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.622221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.622246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.622489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.622517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.622756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.622784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.623079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.623104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 
00:33:52.932 [2024-07-20 18:09:27.623381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.623406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.623635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.623661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.623876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.623902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.624137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.624162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.624412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.624441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.624673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.624715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.624937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.624964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.625176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.625202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.625413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.625439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.625712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.625740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 
00:33:52.932 [2024-07-20 18:09:27.626041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.626075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.626304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.626329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.626656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.626681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.626928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.626954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.627184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.627228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.627484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.627510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.627779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.627814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.628060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.628085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.628346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.628373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.628658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.628682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 
00:33:52.932 [2024-07-20 18:09:27.628893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.628919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.629128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.629169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.629441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.629466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.629709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.629734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.630030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.630059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.630344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.630372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.630630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.630673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.932 [2024-07-20 18:09:27.630917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.932 [2024-07-20 18:09:27.630944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.932 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.631186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.631211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.631529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.631596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 
00:33:52.933 [2024-07-20 18:09:27.631858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.631887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.632151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.632176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.632410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.632436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.632648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.632673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.632931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.632957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.633232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.633257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.633524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.633553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.633837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.633878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.634127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.634154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.634364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.634389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 
00:33:52.933 [2024-07-20 18:09:27.634606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.634647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.634880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.634906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.635124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.635152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.635406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.635431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.635648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.635674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.635921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.635961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.636252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.636283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.636551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.636576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.636816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.636842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.637081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.637107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 
00:33:52.933 [2024-07-20 18:09:27.637336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.637363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.637628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.637654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.637927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.637957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.638316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.638342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.638585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.638612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.638829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.638855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.639129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.639158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.639571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.639627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.639917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.639943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.640162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.640187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 
00:33:52.933 [2024-07-20 18:09:27.640391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.640418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.640668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.640707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.640964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.640992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.641235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.641261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.641491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.641518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.641758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.641783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.642008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.642034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.642296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.642321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.642701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.642727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.643031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.643057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 
00:33:52.933 [2024-07-20 18:09:27.643279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.643304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.643593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.643619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.643936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.643962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.644205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.644230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.644542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.644581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.644822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.644848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.645063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.645105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.645374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.645408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.645682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.645707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.933 [2024-07-20 18:09:27.645926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.645952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 
00:33:52.933 [2024-07-20 18:09:27.646194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.933 [2024-07-20 18:09:27.646219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.933 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.646635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.646686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.646946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.646971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.647206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.647232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.647496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.647521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.647752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.647777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.648013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.648038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.648238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.648263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.648475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.648500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.648740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.648767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 
00:33:52.934 [2024-07-20 18:09:27.649018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.649045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.649276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.649316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.649822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.649877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.650105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.650135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.650381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.650408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.650883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.650911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.651153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.651181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.651601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.651655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.651909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.651936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.652201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.652244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 
00:33:52.934 [2024-07-20 18:09:27.652665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.652716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.653020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.653065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.653335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.653379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.653845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.653872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.654144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.654195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.654729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.654779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.655049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.655075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.655342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.655391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.655714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.655742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.655967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.655994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 
00:33:52.934 [2024-07-20 18:09:27.656235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.656278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.656629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.656681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.656977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.657024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.657490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.657552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.657801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.657844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.658081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.658123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.658480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.658534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.658819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.658849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.659119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.659163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.659423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.659467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 
00:33:52.934 [2024-07-20 18:09:27.659746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.659771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.660038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.660066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.660380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.660412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.660745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.660770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.661023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.661050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.661398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.661470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.661784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.661817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.662130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.662171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.662455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.662498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.662779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.662823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 
00:33:52.934 [2024-07-20 18:09:27.663185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.934 [2024-07-20 18:09:27.663231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:52.934 qpair failed and we were unable to recover it. 00:33:52.934 [2024-07-20 18:09:27.663612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.663651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.663901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.663929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.664180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.664209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.664500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.664528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.664860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.664886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.665143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.665168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.665416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.665441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.665677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.665703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.665951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.665977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 
00:33:52.935 [2024-07-20 18:09:27.666278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.666303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.666583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.666611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.666871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.666899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.667291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.667351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.667620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.667647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.668036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.668063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.668363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.668392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.668868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.668893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.669146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.669174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.669412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.669440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 
00:33:52.935 [2024-07-20 18:09:27.669684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.669712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.669972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.670001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.670266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.670294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.670562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.670590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.670888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.670914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.671232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.671275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.671514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.671542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.671819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.671845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.672104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.672137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.672676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.672725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 
00:33:52.935 [2024-07-20 18:09:27.673010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.673036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.673454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.673499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.673760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.673785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.674058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.674083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.674529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.674581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.674852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.674878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.675166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.675195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.675545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.675573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.675825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.675851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.676113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.676141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 
00:33:52.935 [2024-07-20 18:09:27.676415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.676443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.676773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.676805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.677080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.677108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.677375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.677403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.677642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.935 [2024-07-20 18:09:27.677670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.935 qpair failed and we were unable to recover it. 00:33:52.935 [2024-07-20 18:09:27.677918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.677944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.678215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.678243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.678541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.678567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.678809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.678835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.679102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.679127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 
00:33:52.936 [2024-07-20 18:09:27.679431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.679459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.679756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.679782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.680089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.680119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.680578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.680626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.680896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.680922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.681216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.681264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.681554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.681580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.681865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.681898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.682177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.682203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.682482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.682510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 
00:33:52.936 [2024-07-20 18:09:27.682753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.682781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.683018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.683043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.683325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.683353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.683633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.683657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.683899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.683925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.684166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.684191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.684432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.684457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.684695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.684736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.685023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.685049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.685355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.685380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 
00:33:52.936 [2024-07-20 18:09:27.685602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.685627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.685917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.685942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.686204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.686232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.686495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.686520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.686811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.686855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.687111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.687154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.687377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.687405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.687847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.687890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.688180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.688208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.688485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.688510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 
00:33:52.936 [2024-07-20 18:09:27.688782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.688815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.689061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.689086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.689346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.689375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.689616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.689641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.689944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.689970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.690187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.690212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.690490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.690518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.690787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.690839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.691103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.691129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.691374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.691400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 
00:33:52.936 [2024-07-20 18:09:27.691676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.691705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.691972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.691999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.692243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.692272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.692579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.692635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.692865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.692891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.693138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.693166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.693444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.936 [2024-07-20 18:09:27.693473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.936 qpair failed and we were unable to recover it. 00:33:52.936 [2024-07-20 18:09:27.693917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.693943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.694154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.694179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.694471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.694496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 
00:33:52.937 [2024-07-20 18:09:27.694761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.694789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.695065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.695091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.695377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.695404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.695698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.695727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.695991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.696028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.696263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.696291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.696533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.696559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.696863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.696890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.697179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.697207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.697514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.697561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 
00:33:52.937 [2024-07-20 18:09:27.697816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.697859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.698073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.698102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.698380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.698408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.698687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.698713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.698952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.698978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.699249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.699277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.699541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.699570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:52.937 [2024-07-20 18:09:27.699850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.937 [2024-07-20 18:09:27.699888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:52.937 qpair failed and we were unable to recover it. 00:33:53.208 [2024-07-20 18:09:27.700122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.208 [2024-07-20 18:09:27.700149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.208 qpair failed and we were unable to recover it. 00:33:53.208 [2024-07-20 18:09:27.700388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.208 [2024-07-20 18:09:27.700415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.208 qpair failed and we were unable to recover it. 
00:33:53.208 [2024-07-20 18:09:27.700649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.208 [2024-07-20 18:09:27.700694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.208 qpair failed and we were unable to recover it. 00:33:53.208 [2024-07-20 18:09:27.700955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.208 [2024-07-20 18:09:27.700981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.208 qpair failed and we were unable to recover it. 00:33:53.208 [2024-07-20 18:09:27.701219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.701248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.701504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.701530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.701762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.701791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.702086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.702114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.702352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.702380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.702618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.702647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.702910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.702936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.703169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.703195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 
00:33:53.209 [2024-07-20 18:09:27.703441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.703466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.703705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.703730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.703948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.703974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.704211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.704236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.704526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.704552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.704815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.704840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.705085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.705128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.705421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.705447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.705721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.705749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.706020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.706046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 
00:33:53.209 [2024-07-20 18:09:27.706301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.706327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.706609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.706634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.706883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.706909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.707125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.707151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.707416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.707442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.707756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.707783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.708061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.708090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.708353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.708381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.708645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.708673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.708937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.708963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 
00:33:53.209 [2024-07-20 18:09:27.709202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.709232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.709473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.709500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.709735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.709764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.710058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.710084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.710391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.710419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.710838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.710897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.711146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.711172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.711619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.711670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.711966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.711992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.712263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.712292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 
00:33:53.209 [2024-07-20 18:09:27.712556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.712583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.712825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.712852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.713120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.713149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.713413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.713441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.713701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.713726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.713945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.713970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.714206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.714231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.714527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.714555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.714843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.209 [2024-07-20 18:09:27.714869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.209 qpair failed and we were unable to recover it. 00:33:53.209 [2024-07-20 18:09:27.715135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.715160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 
00:33:53.210 [2024-07-20 18:09:27.715416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.715444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.715706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.715734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.716008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.716033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.716245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.716271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.716561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.716586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.716895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.716921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.717169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.717194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.717622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.717678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.717950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.717975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.718242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.718267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 
00:33:53.210 [2024-07-20 18:09:27.718473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.718498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.718737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.718763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.719010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.719036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.719330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.719358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.719629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.719656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.719909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.719935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.720170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.720196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.720501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.720541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.720811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.720840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.721146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.721172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 
00:33:53.210 [2024-07-20 18:09:27.721427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.721452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.721697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.721722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.721939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.721966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.722394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.722444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.722714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.722740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.722990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.723016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.723288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.723316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.723592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.723633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.723889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.723915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.724153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.724178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 
00:33:53.210 [2024-07-20 18:09:27.724446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.724475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.724934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.210 [2024-07-20 18:09:27.724960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.210 qpair failed and we were unable to recover it. 00:33:53.210 [2024-07-20 18:09:27.725181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.725206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.725448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.725476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.725919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.725944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.726215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.726241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.726546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.726574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.726860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.726885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.727175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.727204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.727724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.727777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 
00:33:53.211 [2024-07-20 18:09:27.728005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.728030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.728265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.728291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.728499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.728526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.728822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.728852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.729107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.729133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.729360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.729385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.729646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.729674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.729940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.729966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.730244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.730272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.730540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.730568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 
00:33:53.211 [2024-07-20 18:09:27.730866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.730892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.731162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.731190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.731425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.731453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.731686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.731714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.731994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.732020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.732365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.732432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.732702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.732728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.732938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.732965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.733207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.733232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.733444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.733469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 
00:33:53.211 [2024-07-20 18:09:27.733736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.733764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.734033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.734063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.734312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.734337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.734593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.734634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.734913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.734946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.735191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.735230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.735487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.735517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.735777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.735815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.736069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.736098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.736366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.736395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 
00:33:53.211 [2024-07-20 18:09:27.736637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.736667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.736986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.737016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.737276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.737304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.737754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.737813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.738066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.738092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.738335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.738362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.738600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.738625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.738911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.738938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.739205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.739230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.739491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.739516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 
00:33:53.211 [2024-07-20 18:09:27.739924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.739950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.740191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.740232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.211 [2024-07-20 18:09:27.740500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.211 [2024-07-20 18:09:27.740528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.211 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.740905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.740934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.741229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.741254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.741520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.741545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.741787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.741818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.742081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.742110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.742375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.742411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.742680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.742708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 
00:33:53.212 [2024-07-20 18:09:27.742974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.743000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.743239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.743265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.743476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.743503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.743765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.743803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.744070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.744097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.744385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.744413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.744711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.744740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.744975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.745001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.745235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.745261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.745519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.745549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 
00:33:53.212 [2024-07-20 18:09:27.745838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.745867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.746111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.746141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.746499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.746551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.746800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.746827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.747058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.747086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.747358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.747387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.747801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.747827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.748071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.748096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.748361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.748388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.748604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.748630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 
00:33:53.212 [2024-07-20 18:09:27.748910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.748936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.749180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.749206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.749471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.749513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.749908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.749934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.750143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.750168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.750396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.750423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.750835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.750879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.751154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.751185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.751605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.751632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 00:33:53.212 [2024-07-20 18:09:27.751855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.212 [2024-07-20 18:09:27.751881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.212 qpair failed and we were unable to recover it. 
00:33:53.213 [2024-07-20 18:09:27.752120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.752148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.752585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.752638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.752861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.752887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.753125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.753150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.753453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.753510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.753773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.753807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.754055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.754082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.754489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.754546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.754816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.754859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.755088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.755133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 
00:33:53.213 [2024-07-20 18:09:27.755582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.755633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.755901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.755927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.756145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.756171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.756414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.756439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.756659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.756685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.756907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.756949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.757190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.757220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.757751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.757777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.758017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.758044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.758297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.758327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 
00:33:53.213 [2024-07-20 18:09:27.758757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.758842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.759112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.759138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.759360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.759394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.759637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.759663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.759973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.760000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.760259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.760287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.760695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.760744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.761024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.761053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.761358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.761386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.761658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.761684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 
00:33:53.213 [2024-07-20 18:09:27.761946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.761975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.762213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.762242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.762643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.762669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.762911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.762937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.763180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.763207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.763416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.763441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.763695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.763723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.764002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.764029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.764321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.764347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.764671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.764732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 
00:33:53.213 [2024-07-20 18:09:27.764979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.765005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.765277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.765308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.765752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.765812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.766092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.766121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.766575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.766639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.766972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.766998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.767282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.213 [2024-07-20 18:09:27.767310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.213 qpair failed and we were unable to recover it. 00:33:53.213 [2024-07-20 18:09:27.767759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.767822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.768076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.768101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.768400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.768433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 
00:33:53.214 [2024-07-20 18:09:27.768914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.768940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.769172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.769215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.769484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.769510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.769731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.769757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.770013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.770039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.770313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.770342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.770612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.770638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.770894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.770920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.771211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.771239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.771506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.771535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 
00:33:53.214 [2024-07-20 18:09:27.771790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.771839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.772092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.772136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.772411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.772439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.772711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.772741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.773014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.773041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.773314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.773340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.773865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.773891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.774151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.774180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.774426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.774452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.774916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.774942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 
00:33:53.214 [2024-07-20 18:09:27.775204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.775232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.775656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.775706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.775994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.776020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.776311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.776338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.776555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.776582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.776887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.776913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.777165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.777195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.777712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.777764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.778033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.778059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.778342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.778371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 
00:33:53.214 [2024-07-20 18:09:27.778653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.778679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.778958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.778985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.779327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.779381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.779670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.779699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.779956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.779982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.780220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.780246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.780503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.780532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.780802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.780856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.781100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.781125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.781370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.781398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 
00:33:53.214 [2024-07-20 18:09:27.781640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.781669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.781919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.781946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.782164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.782190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.782477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.782505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.782748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.782783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.783073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.783099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.214 qpair failed and we were unable to recover it. 00:33:53.214 [2024-07-20 18:09:27.783369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.214 [2024-07-20 18:09:27.783395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.783697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.783726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.783998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.784026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.784297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.784325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 
00:33:53.215 [2024-07-20 18:09:27.784611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.784646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.784971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.784997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.785243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.785272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.785523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.785551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.785812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.785856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.786098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.786123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.786335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.786377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.786641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.786666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.786967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.787009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.787249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.787278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 
00:33:53.215 [2024-07-20 18:09:27.787563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.787591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.787879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.787909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.788190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.788216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.788466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.788494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.788758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.788786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.789098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.789127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.789419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.789444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.789733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.789761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.790048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.790074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.790338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.790367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 
00:33:53.215 [2024-07-20 18:09:27.790660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.790686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.790970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.790996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.791241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.791266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.791532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.791560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.791819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.791854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.792180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.792210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.792505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.792533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.792798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.792835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.793088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.793114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.793388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.793416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 
00:33:53.215 [2024-07-20 18:09:27.793677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.793717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.794004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.794030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.794245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.794270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.794567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.794595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.794835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.794861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.795172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.795197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.795475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.795500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.795799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.795843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.796098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.796126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.796486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.796527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 
00:33:53.215 [2024-07-20 18:09:27.796868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.796895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.797371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.797416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.797728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.797758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.798010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.798037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.798267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.798298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.798608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.798636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.798907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.798937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.799205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.799233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.215 [2024-07-20 18:09:27.799481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.215 [2024-07-20 18:09:27.799505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.215 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.799752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.799778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 
00:33:53.216 [2024-07-20 18:09:27.800023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.800048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.800358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.800387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.800677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.800701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.801003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.801029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.801349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.801414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.801677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.801705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.802001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.802027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.802273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.802301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.802573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.802602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.802878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.802908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 
00:33:53.216 [2024-07-20 18:09:27.803166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.803192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.803467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.803496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.803923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.803952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.804224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.804252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.804546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.804571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.804866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.804892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.805402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.805445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.805727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.805757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.806043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.806070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.806343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.806371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 
00:33:53.216 [2024-07-20 18:09:27.806635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.806663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.806952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.806986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.807231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.807256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.807546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.807574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.807837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.807866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.808110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.808139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.808399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.808425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.808727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.808756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.809031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.809057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.809335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.809364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 
00:33:53.216 [2024-07-20 18:09:27.809614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.809639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.810064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.810095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.810362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.810391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.810680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.810710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.810977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.811003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.811255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.811283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.811570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.811595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.811878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.811909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.812173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.812198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.812455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.812483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 
00:33:53.216 [2024-07-20 18:09:27.812780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.812816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.813064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.216 [2024-07-20 18:09:27.813091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.216 qpair failed and we were unable to recover it. 00:33:53.216 [2024-07-20 18:09:27.813358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.813383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.813683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.813711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.813994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.814023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.814350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.814420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.814721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.814746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.815037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.815064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.815395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.815432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.815907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.815936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 
00:33:53.217 [2024-07-20 18:09:27.816202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.816227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.816496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.816525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.816800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.816829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.817098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.817126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.817410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.817436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.817755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.817781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.818101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.818129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.818404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.818429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.818676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.818702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.818949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.818978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 
00:33:53.217 [2024-07-20 18:09:27.819266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.819294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.819562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.819601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.819933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.819973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.820262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.820290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.820572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.820600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.820896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.820922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.821156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.821195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.821479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.821507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.821803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.821831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.822062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.822088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 
00:33:53.217 [2024-07-20 18:09:27.822388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.822427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.822698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.822726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.822980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.823006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.823283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.823308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.823600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.823626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.823946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.823974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.824239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.824267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.824523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.824551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.824798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.824824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.825096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.825123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 
00:33:53.217 [2024-07-20 18:09:27.825373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.825402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.825848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.825877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.826110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.826135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.826410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.826443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.826739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.826767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.827045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.827074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.827339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.827365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.827634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.827661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.827952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.827978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.828283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.828315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 
00:33:53.217 [2024-07-20 18:09:27.828600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.828626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.828929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.217 [2024-07-20 18:09:27.828955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.217 qpair failed and we were unable to recover it. 00:33:53.217 [2024-07-20 18:09:27.829178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.829219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.829702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.829754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.830048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.830074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.830385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.830410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.830739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.830767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.831048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.831077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.831338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.831363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.831657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.831685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 
00:33:53.218 [2024-07-20 18:09:27.831979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.832008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.832274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.832303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.832601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.832626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.832874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.832900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.833166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.833195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.833425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.833451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.833862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.833888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.834162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.834188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.834407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.834434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.834655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.834680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 
00:33:53.218 [2024-07-20 18:09:27.834928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.834955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.835192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.835218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.835519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.835548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.835820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.835846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.836087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.836112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.836420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.836445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.836685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.836714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.836938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.836964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.837207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.837233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.837443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.837469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 
00:33:53.218 [2024-07-20 18:09:27.837703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.837728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.837947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.837973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.838209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.838234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.838439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.838465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.838741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.838770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.839041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.839068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.839309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.839335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.839578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.839603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.839871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.839898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.840109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.840135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 
00:33:53.218 [2024-07-20 18:09:27.840362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.840388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.840596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.840622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.840858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.840902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.841168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.841195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.841457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.841482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.841776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.841826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.842078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.842103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.842345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.842386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.842665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.842690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.842905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.842931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 
00:33:53.218 [2024-07-20 18:09:27.843185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.843214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.843479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.843505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.843767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.843808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.844067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.218 [2024-07-20 18:09:27.844096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.218 qpair failed and we were unable to recover it. 00:33:53.218 [2024-07-20 18:09:27.844411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.844437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.844685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.844711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.844933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.844959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.845212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.845237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.845477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.845504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.845729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.845756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 
00:33:53.219 [2024-07-20 18:09:27.845996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.846022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.846257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.846282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.846498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.846539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.846827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.846854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.847069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.847096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.847322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.847349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.847620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.847649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.847906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.847933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.848148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.848174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.848409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.848435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 
00:33:53.219 [2024-07-20 18:09:27.848684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.848712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.848968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.848994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.849260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.849286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.849559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.849585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.849851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.849879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.850098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.850124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.850357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.850384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.850630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.850655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.850924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.850949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.851196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.851222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 
00:33:53.219 [2024-07-20 18:09:27.851497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.851522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.851782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.851845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.852054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.852086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.852343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.852369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.852595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.852620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.852925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.852952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.853200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.853225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.853496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.853525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.853779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.853812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.854052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.854078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 
00:33:53.219 [2024-07-20 18:09:27.854333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.854359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.854597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.854624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.854842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.854869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.855106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.855131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.855397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.855423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.855669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.855697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.855949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.855975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.856270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.856298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.856570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.856598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 00:33:53.219 [2024-07-20 18:09:27.856897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.856923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.219 qpair failed and we were unable to recover it. 
00:33:53.219 [2024-07-20 18:09:27.857164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.219 [2024-07-20 18:09:27.857189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.857452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.857481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.857743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.857771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.858039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.858088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.858345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.858371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.858675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.858700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.858953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.858979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.859273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.859301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.859569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.859594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.859880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.859906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 
00:33:53.220 [2024-07-20 18:09:27.860137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.860166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.860406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.860436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.860884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.860910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.861162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.861190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.861479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.861507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.861820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.861867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.862113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.862138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.862431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.862460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.862752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.862780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.863017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.863043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 
00:33:53.220 [2024-07-20 18:09:27.863277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.863303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.863566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.863599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.863899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.863925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.864195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.864221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.864427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.864452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.864697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.864725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.864989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.865015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.865274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.865303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.865583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.865608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.865909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.865934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 
00:33:53.220 [2024-07-20 18:09:27.866151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.866177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.866452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.866481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.866778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.866816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.867037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.867062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.867337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.867365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.867657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.867683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.867901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.867927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.868140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.868165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.868450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.868475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.868712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.868738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 
00:33:53.220 [2024-07-20 18:09:27.868976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.869003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.869243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.220 [2024-07-20 18:09:27.869271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.220 qpair failed and we were unable to recover it. 00:33:53.220 [2024-07-20 18:09:27.869574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.869602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.869911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.869937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.870171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.870196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.870445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.870475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.870741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.870769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.871039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.871065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.871338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.871367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.871635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.871663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 
00:33:53.221 [2024-07-20 18:09:27.871930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.871956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.872256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.872285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.872566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.872591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.872890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.872916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.873182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.873211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.873446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.873476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.873746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.873772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.874022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.874048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.874329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.874357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.874589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.874617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 
00:33:53.221 [2024-07-20 18:09:27.874895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.874920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.875185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.875213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.875479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.875505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.875766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.875802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.876042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.876068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.876365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.876393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.876683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.876708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.877033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.877062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.877284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.877309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.877594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.877623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 
00:33:53.221 [2024-07-20 18:09:27.877916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.877945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.878206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.878234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.878531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.878558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.878825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.878852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.879103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.879131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.879395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.879429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.879702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.879729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.880016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.880044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.880306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.880334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.880655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.880710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 
00:33:53.221 [2024-07-20 18:09:27.880950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.880977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.881251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.881280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.881552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.881580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.881882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.881911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.882165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.882190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.882448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.882477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.882780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.882817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.883095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.883124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.883399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.883424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.883821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.883881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 
00:33:53.221 [2024-07-20 18:09:27.884126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.221 [2024-07-20 18:09:27.884154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.221 qpair failed and we were unable to recover it. 00:33:53.221 [2024-07-20 18:09:27.884392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.884421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.884651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.884677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.884889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.884915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.885131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.885173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.885400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.885429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.885695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.885720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.885993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.886021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.886276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.886305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.886594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.886623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 
00:33:53.222 [2024-07-20 18:09:27.886859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.886885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.887180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.887205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.887505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.887533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.887810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.887839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.888100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.888125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.888358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.888386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.888616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.888644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.889005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.889035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.889287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.889312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.889526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.889552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 
00:33:53.222 [2024-07-20 18:09:27.889821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.889857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.890095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.890122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.890377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.890402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.890703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.890731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.891005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.891034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.891298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.891326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.891597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.891627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.891912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.891941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.892219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.892247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.892509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.892537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 
00:33:53.222 [2024-07-20 18:09:27.892804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.892830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.893058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.893083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.893333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.893362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.893841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.893870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.894135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.894161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.894379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.894405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.894630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.894671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.894961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.894989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.895291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.895333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.895597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.895625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 
00:33:53.222 [2024-07-20 18:09:27.895897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.895923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.896198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.896226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.896488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.896513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.896788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.896823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.897078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.897106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.897375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.897403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.897662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.897687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.897964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.897992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.898272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.898296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.898590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.898618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 
00:33:53.222 [2024-07-20 18:09:27.898857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.898883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.222 [2024-07-20 18:09:27.899148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.222 [2024-07-20 18:09:27.899175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.222 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.899443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.899468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.899880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.899914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.900177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.900202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.900468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.900496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.900760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.900788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.901047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.901075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.901336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.901361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.901604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.901632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 
00:33:53.223 [2024-07-20 18:09:27.901923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.901952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.902218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.902243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.902580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.902636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.902905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.902943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.903231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.903259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.903546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.903574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.903835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.903864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.904146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.904175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.904439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.904467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.904863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.904892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 
00:33:53.223 [2024-07-20 18:09:27.905155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.905180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.905498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.905526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.905790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.905826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.906122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.906149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.906404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.906430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.906707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.906736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.907027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.907057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.907300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.907329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.907583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.907609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.907879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.907909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 
00:33:53.223 [2024-07-20 18:09:27.908170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.908204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.908493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.908522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.908826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.908865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.909151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.909179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.909464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.909489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.909916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.909947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.910195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.910220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.910467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.910495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.910787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.910824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.911060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.911088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 
00:33:53.223 [2024-07-20 18:09:27.911357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.911383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.911627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.911655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.911923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.911948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.912182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.912208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.912543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.912613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.912882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.223 [2024-07-20 18:09:27.912911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.223 qpair failed and we were unable to recover it. 00:33:53.223 [2024-07-20 18:09:27.913213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.913241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.913479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.913507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.913822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.913857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.914157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.914185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 
00:33:53.224 [2024-07-20 18:09:27.914450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.914478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.914911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.914940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.915201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.915226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.915467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.915495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.915717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.915747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.916035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.916064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.916451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.916479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.916740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.916768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.917047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.917075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.917334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.917362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 
00:33:53.224 [2024-07-20 18:09:27.917631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.917657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.918025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.918054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.918292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.918321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.918689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.918713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.918968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.918994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.919264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.919293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.919555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.919583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.919872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.919901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.920161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.920186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.920477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.920505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 
00:33:53.224 [2024-07-20 18:09:27.920745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.920774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.921067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.921096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.921354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.921379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.921672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.921700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.921971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.921997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.922287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.922315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.922553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.922579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.922852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.922881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.923166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.923194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.923433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.923462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 
00:33:53.224 [2024-07-20 18:09:27.923818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.923843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.924139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.924167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.924429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.924457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.924727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.924755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.925034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.925060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.925379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.925407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.925671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.925700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.925976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.926003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.926441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.926508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.926800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.926829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 
00:33:53.224 [2024-07-20 18:09:27.927094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.927123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.927411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.927440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.927737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.927778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.928030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.928060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.928327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.928355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.928600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.928628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.928913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.928939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.929216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.224 [2024-07-20 18:09:27.929243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.224 qpair failed and we were unable to recover it. 00:33:53.224 [2024-07-20 18:09:27.929535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.929569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.929832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.929860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 
00:33:53.225 [2024-07-20 18:09:27.930149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.930174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.930451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.930480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.930904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.930934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.931213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.931241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.931530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.931555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.931907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.931932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.932359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.932403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.932707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.932737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.932992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.933019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.933394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.933420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 
00:33:53.225 [2024-07-20 18:09:27.933730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.933756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.934067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.934096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.934366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.934391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.934598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.934625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.934926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.934956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.935235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.935261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.935545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.935571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.935891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.935919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.936173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.936201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.936423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.936452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 
00:33:53.225 [2024-07-20 18:09:27.936700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.936726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.936992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.937018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.937286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.937314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.937582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.937610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.937873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.937898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.938165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.938195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.938448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.938473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.938827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.938857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.939096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.939121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.939384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.939412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 
00:33:53.225 [2024-07-20 18:09:27.939632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.939660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.939923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.939952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.940236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.940261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.940549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.940577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.940818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.940845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.941127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.941154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.941415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.941441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.941694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.941722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.941993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.942022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.942341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.942369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 
00:33:53.225 [2024-07-20 18:09:27.942630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.942655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.942875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.942918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.943209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.943237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.943664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.943712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.225 [2024-07-20 18:09:27.943958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-07-20 18:09:27.943984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.225 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.944221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.944249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.944543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.944571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.944843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.944874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.945142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.945167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.945536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.945560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 
00:33:53.226 [2024-07-20 18:09:27.945831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.945860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.946151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.946176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.946454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.946483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.946911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.946940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.947236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.947264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.947537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.947565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.947916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.947956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.948208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.948237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.948500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.948529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.948819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.948848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 
00:33:53.226 [2024-07-20 18:09:27.949135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.949160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.949417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.949445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.949730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.949759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.950056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.950082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.950299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.950325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.950612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.950640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.950905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.950934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.951194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.951222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.951509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.951534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.951838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.951867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 
00:33:53.226 [2024-07-20 18:09:27.952106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.952134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.952422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.952451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.952915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.952940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.953168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.953193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.953465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.953493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.953756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.953784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.954070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.954095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.954357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.954385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.954648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.954677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.954925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.954954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 
00:33:53.226 [2024-07-20 18:09:27.955245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.955270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.955557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.955586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.955853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.955882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.956149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.956177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.956454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.956478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.956739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.956767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.957100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.957125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.957358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.957383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.957731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.957802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.958180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.958223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 
00:33:53.226 [2024-07-20 18:09:27.958497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.958527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.958773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.958823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.959087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.226 [2024-07-20 18:09:27.959112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.226 qpair failed and we were unable to recover it. 00:33:53.226 [2024-07-20 18:09:27.959411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.959445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.959708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.959734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.960031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.960060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.960299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.960323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.960593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.960621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.960893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.960919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.961135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.961159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 
00:33:53.227 [2024-07-20 18:09:27.961583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.961634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.961913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.961940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.962212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.962239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.962503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.962531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.962947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.962971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.963261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.963289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.963580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.963605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.963894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.963923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.964188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.964214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.964456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.964485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 
00:33:53.227 [2024-07-20 18:09:27.964754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.964782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.965056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.965084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.965371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.965397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.965644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.965672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.965961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.965987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.966311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.966339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.966863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.966888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.967134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.967159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.967431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.967459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.967745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.967774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 
00:33:53.227 [2024-07-20 18:09:27.968042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.968072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.968372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.968401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.968662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.968690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.968979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.969008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.969350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.969393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.969657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.969685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.969952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.969978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.970289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.970317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.970854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.970879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.971168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.971197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 
00:33:53.227 [2024-07-20 18:09:27.971469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.971497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.971761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.971789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.972064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.972089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.972363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.972391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.972685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.972713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.973000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.227 [2024-07-20 18:09:27.973029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.227 qpair failed and we were unable to recover it. 00:33:53.227 [2024-07-20 18:09:27.973317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.973343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.973618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.973646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.973941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.973966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.974258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.974287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 
00:33:53.228 [2024-07-20 18:09:27.974553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.974582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.974856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.974882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.975173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.975201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.975467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.975495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.975885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.975932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.976170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.976198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.976486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.976515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.976811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.976845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.977153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.977178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.977460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.977489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 
00:33:53.228 [2024-07-20 18:09:27.977751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.977779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.978028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.978055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.978293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.978318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.978566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.978595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.978876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.978902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.979148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.979173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.979411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.979437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.979676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.979705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.979969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.979999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.980260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.980288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 
00:33:53.228 [2024-07-20 18:09:27.980571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.980597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.980921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.980947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.981205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.981235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.981500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.981538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.981805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.981834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.982129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.982157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.982418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.982446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.982718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.982747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.983012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.983037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.983270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.983296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 
00:33:53.228 [2024-07-20 18:09:27.983606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.983634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.983888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.983924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.984217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.984258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.984544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.984569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.984853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.984882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.985134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.985162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.985423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.985448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.985712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.985740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.986034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.986060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.986351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.986376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 
00:33:53.228 [2024-07-20 18:09:27.986714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.986742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.986982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.987019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.987291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.987323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.987590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.987618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.987884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.987910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.988210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.988238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.988496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.228 [2024-07-20 18:09:27.988524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.228 qpair failed and we were unable to recover it. 00:33:53.228 [2024-07-20 18:09:27.988847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.229 [2024-07-20 18:09:27.988877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.229 qpair failed and we were unable to recover it. 00:33:53.229 [2024-07-20 18:09:27.989169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.229 [2024-07-20 18:09:27.989198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.229 qpair failed and we were unable to recover it. 00:33:53.229 [2024-07-20 18:09:27.989473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.229 [2024-07-20 18:09:27.989501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.229 qpair failed and we were unable to recover it. 
00:33:53.229 [2024-07-20 18:09:27.989798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.229 [2024-07-20 18:09:27.989827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.229 qpair failed and we were unable to recover it. 00:33:53.229 [2024-07-20 18:09:27.990063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.229 [2024-07-20 18:09:27.990091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.229 qpair failed and we were unable to recover it. 00:33:53.229 [2024-07-20 18:09:27.990401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.229 [2024-07-20 18:09:27.990446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.229 qpair failed and we were unable to recover it. 00:33:53.229 [2024-07-20 18:09:27.990715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.229 [2024-07-20 18:09:27.990753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.229 qpair failed and we were unable to recover it. 00:33:53.501 [2024-07-20 18:09:27.991026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.501 [2024-07-20 18:09:27.991066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.501 qpair failed and we were unable to recover it. 00:33:53.501 [2024-07-20 18:09:27.991593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.501 [2024-07-20 18:09:27.991637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.501 qpair failed and we were unable to recover it. 00:33:53.501 [2024-07-20 18:09:27.991923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.501 [2024-07-20 18:09:27.991952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.501 qpair failed and we were unable to recover it. 00:33:53.501 [2024-07-20 18:09:27.992203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.501 [2024-07-20 18:09:27.992233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.501 qpair failed and we were unable to recover it. 00:33:53.501 [2024-07-20 18:09:27.992487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.501 [2024-07-20 18:09:27.992516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.501 qpair failed and we were unable to recover it. 00:33:53.501 [2024-07-20 18:09:27.992780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.501 [2024-07-20 18:09:27.992816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.501 qpair failed and we were unable to recover it. 
00:33:53.501 [2024-07-20 18:09:27.993075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.501 [2024-07-20 18:09:27.993116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.501 qpair failed and we were unable to recover it. 00:33:53.501 [2024-07-20 18:09:27.993376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.501 [2024-07-20 18:09:27.993401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.501 qpair failed and we were unable to recover it. 00:33:53.501 [2024-07-20 18:09:27.993675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.501 [2024-07-20 18:09:27.993704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.501 qpair failed and we were unable to recover it. 00:33:53.501 [2024-07-20 18:09:27.993971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.501 [2024-07-20 18:09:27.994000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.501 qpair failed and we were unable to recover it. 00:33:53.501 [2024-07-20 18:09:27.994268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.501 [2024-07-20 18:09:27.994296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.501 qpair failed and we were unable to recover it. 00:33:53.501 [2024-07-20 18:09:27.994522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.501 [2024-07-20 18:09:27.994547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.501 qpair failed and we were unable to recover it. 00:33:53.501 [2024-07-20 18:09:27.994828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.501 [2024-07-20 18:09:27.994859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.501 qpair failed and we were unable to recover it. 00:33:53.501 [2024-07-20 18:09:27.995123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.501 [2024-07-20 18:09:27.995151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.501 qpair failed and we were unable to recover it. 00:33:53.501 [2024-07-20 18:09:27.995478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:27.995508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:27.995774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:27.995807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 
00:33:53.502 [2024-07-20 18:09:27.996039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:27.996079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:27.996316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:27.996345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:27.996596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:27.996624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:27.996887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:27.996913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:27.997199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:27.997227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:27.997770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:27.997826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:27.998124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:27.998152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:27.998435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:27.998460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:27.998912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:27.998939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:27.999207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:27.999234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 
00:33:53.502 [2024-07-20 18:09:27.999717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:27.999747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.000046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.000072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.000342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.000371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.000849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.000893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.001132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.001175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.001415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.001441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.001906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.001932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.002145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.002172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.002440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.002475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.002764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.002790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 
00:33:53.502 [2024-07-20 18:09:28.003060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.003104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.003638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.003690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.003973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.004000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.004253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.004278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.004548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.004573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.004857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.004901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.005169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.005199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.005467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.005492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.005911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.005937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.006177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.006221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 
00:33:53.502 [2024-07-20 18:09:28.006484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.006513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.006763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.006790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.007092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.007121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.007382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.007411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.007644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.007673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.007994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.008038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.008324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.008352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.008614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.008642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.008880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.008910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.009185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.009211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 
00:33:53.502 [2024-07-20 18:09:28.009474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.009502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.009763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.009788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.010095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.010123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.010421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.010463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.502 [2024-07-20 18:09:28.010933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.502 [2024-07-20 18:09:28.010962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.502 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.011207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.011238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.011527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.011556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.011812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.011838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.012156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.012182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.012474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.012502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 
00:33:53.503 [2024-07-20 18:09:28.012920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.012949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.013230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.013255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.013526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.013555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.013900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.013941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.014247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.014275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.014507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.014532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.014800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.014830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.015094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.015124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.015385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.015418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.015735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.015760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 
00:33:53.503 [2024-07-20 18:09:28.016058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.016084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.016375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.016404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.016835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.016864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.017102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.017142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.017387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.017416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.017737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.017814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.018085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.018114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.018398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.018424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.018630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.018656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.018919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.018949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 
00:33:53.503 [2024-07-20 18:09:28.019249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.019277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.019571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.019597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.019882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.019912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.020174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.020204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.020481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.020510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.020747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.020787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.021104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.021132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.021425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.021453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.021917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.021946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.022229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.022254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 
00:33:53.503 [2024-07-20 18:09:28.022533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.022561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.022820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.022858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.023102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.023128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.023542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.023602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.023866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.023895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.024135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.024163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.024449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.024477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.024715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.024740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.024972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.024998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.025288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.025317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 
00:33:53.503 [2024-07-20 18:09:28.025641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.025681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.503 qpair failed and we were unable to recover it. 00:33:53.503 [2024-07-20 18:09:28.025932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.503 [2024-07-20 18:09:28.025958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.026252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.026280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.026565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.026594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.026835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.026864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.027094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.027119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.027417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.027446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.027731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.027760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.028008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.028042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.028303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.028329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 
00:33:53.504 [2024-07-20 18:09:28.028605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.028635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.028907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.028936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.029194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.029222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.029558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.029599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.029879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.029908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.030169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.030198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.030464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.030493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.030742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.030768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.031061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.031089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.031379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.031408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 
00:33:53.504 [2024-07-20 18:09:28.031697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.031722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.031961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.031986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.032258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.032284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.032570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.032596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.032833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.032861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.033116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.033142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.033418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.033447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.033746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.033774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.034019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.034047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.034325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.034349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 
00:33:53.504 [2024-07-20 18:09:28.034610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.034640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.034935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.034965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.035252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.035278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.035559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.035585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.035909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.035937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.036201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.036230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.036484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.036510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.036812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.036853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.037175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.037199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.037461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.037486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 
00:33:53.504 [2024-07-20 18:09:28.037771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.037801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.038045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.038071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.038350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.038380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.038646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.038674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.038963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.038992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.039223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.039248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.039512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.039541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.039837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.039866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.040129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.040161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.040436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.040462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 
00:33:53.504 [2024-07-20 18:09:28.040890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.040919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.041208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.504 [2024-07-20 18:09:28.041235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.504 qpair failed and we were unable to recover it. 00:33:53.504 [2024-07-20 18:09:28.041506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.041534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.041805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.041832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.042076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.042105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.042398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.042423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.042875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.042903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.043166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.043191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.043509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.043537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.043809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.043852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 
00:33:53.505 [2024-07-20 18:09:28.044106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.044134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.044445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.044487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.044761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.044790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.045068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.045093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.045376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.045402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.045620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.045645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.045902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.045931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.046169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.046198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.046435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.046464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.046762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.046814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 
00:33:53.505 [2024-07-20 18:09:28.047081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.047109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.047410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.047439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.047902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.047931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.048193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.048217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.048496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.048525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.048787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.048821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.049075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.049103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.049377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.049401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.049661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.049690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.049946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.049976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 
00:33:53.505 [2024-07-20 18:09:28.050275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.050303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.050618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.050660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.050901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.050931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.051198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.051226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.051509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.051537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.051833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.051877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.052105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.052136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.052378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.052406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.052895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.052929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.053201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.053227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 
00:33:53.505 [2024-07-20 18:09:28.053505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.053534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.053825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.505 [2024-07-20 18:09:28.053854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.505 qpair failed and we were unable to recover it. 00:33:53.505 [2024-07-20 18:09:28.054123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.054151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.054450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.054476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.054788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.054823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.055092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.055120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.055383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.055413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.055655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.055681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.056008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.056037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.056274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.056303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 
00:33:53.506 [2024-07-20 18:09:28.056630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.056659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.056922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.056948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.057205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.057233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.057490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.057519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.057815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.057841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.058135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.058176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.058466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.058494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.058762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.058790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.059051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.059079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.059478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.059536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 
00:33:53.506 [2024-07-20 18:09:28.059841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.059867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.060157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.060182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.060434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.060464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.060750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.060776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.061064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.061092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.061378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.061407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.061668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.061696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.061946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.061972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.062271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.062300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.062570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.062600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 
00:33:53.506 [2024-07-20 18:09:28.062858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.062884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.063124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.063150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.063401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.063430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.063678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.063707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.063978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.064007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.064261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.064287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.064561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.064591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.064854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.064883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.065173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.065206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.065484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.065509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 
00:33:53.506 [2024-07-20 18:09:28.065777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.065812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.066108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.066136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.066397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.066425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.066899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.066938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.067157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.067182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.067428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.067454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.067871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.067899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.068176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.068201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.506 [2024-07-20 18:09:28.068465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.506 [2024-07-20 18:09:28.068494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.506 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.068755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.068783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 
00:33:53.507 [2024-07-20 18:09:28.069077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.069103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.069380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.069405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.069665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.069694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.069956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.069982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.070250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.070279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.070542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.070567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.070873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.070902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.071154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.071182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.071437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.071465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.071719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.071746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 
00:33:53.507 [2024-07-20 18:09:28.072033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.072060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.072318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.072343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.072836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.072882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.073166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.073191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.073405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.073430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.073719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.073763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.074047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.074075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.074380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.074406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.074849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.074877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.075134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.075158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 
00:33:53.507 [2024-07-20 18:09:28.075417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.075445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.075730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.075755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.076213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.076257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.076821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.076870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.077151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.077180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.077441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.077467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.077817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.077876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.078305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.078349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.078658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.078688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.078965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.078992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 
00:33:53.507 [2024-07-20 18:09:28.079244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.079278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.079752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.079809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.080087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.080131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.080395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.080420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.080892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.080921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.081219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.081247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.081485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.081515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.081790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.081824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.082079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.082104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.082340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.082365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 
00:33:53.507 [2024-07-20 18:09:28.082584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.082610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.082878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.082904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.083160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.083190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.083527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.083590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.083909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.083935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.084179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.084204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.084454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.084480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.084843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.084884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.085143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.507 [2024-07-20 18:09:28.085170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.507 qpair failed and we were unable to recover it. 00:33:53.507 [2024-07-20 18:09:28.085436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.085463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 
00:33:53.508 [2024-07-20 18:09:28.085764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.085791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.086056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.086082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.086319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.086345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.086604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.086630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.086871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.086897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.087305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.087342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.087638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.087665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.087960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.087995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.088240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.088267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.088481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.088506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 
00:33:53.508 [2024-07-20 18:09:28.088746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.088772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.088999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.089025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.089239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.089265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.089505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.089530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.089815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.089841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.090083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.090108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.090343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.090368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.090603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.090629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.090870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.090896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.091142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.091167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 
00:33:53.508 [2024-07-20 18:09:28.091378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.091403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.091642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.091668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.091901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.091927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.092133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.092158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.092399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.092425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.092749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.092813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.093066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.093091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.093299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.093326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.093612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.093637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.093877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.093904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 
00:33:53.508 [2024-07-20 18:09:28.094180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.094206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.094507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.094532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.094743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.094773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.095011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.095036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.095284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.095309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.095572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.095597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.095803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.095829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.096037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.096063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.096301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.096327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.096566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.096591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 
00:33:53.508 [2024-07-20 18:09:28.096808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.096833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.097071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.508 [2024-07-20 18:09:28.097096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.508 qpair failed and we were unable to recover it. 00:33:53.508 [2024-07-20 18:09:28.097334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.097378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.097642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.097668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.097901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.097927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.098189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.098214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.098433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.098459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.098725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.098751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.098997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.099024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.099260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.099285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 
00:33:53.509 [2024-07-20 18:09:28.099527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.099552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.099830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.099856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.100065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.100090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.100363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.100392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.100640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.100666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.100879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.100905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.101172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.101216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.101453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.101480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.101708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.101734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.101982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.102008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 
00:33:53.509 [2024-07-20 18:09:28.102252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.102293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.102585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.102610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.102827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.102854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.103064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.103090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.103297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.103323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.103540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.103567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.103802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.103828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.104067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.104094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.104356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.104398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.104681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.104706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 
00:33:53.509 [2024-07-20 18:09:28.104911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.104937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.105149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.105174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.105420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.105449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.105690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.105716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.105959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.105984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.106222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.106248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.106475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.106500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.106716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.106741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.106978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.107005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.107242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.107268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 
00:33:53.509 [2024-07-20 18:09:28.107502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.107527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.107840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.107866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.108078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.108103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.108312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.108338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.108557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.108582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.108846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.108871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.109117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.109143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.509 [2024-07-20 18:09:28.109403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.509 [2024-07-20 18:09:28.109428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.509 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.109666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.109708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.110007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.110033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 
00:33:53.510 [2024-07-20 18:09:28.110293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.110321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.110604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.110629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.110893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.110919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.111153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.111178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.111388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.111415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.111628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.111653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.111929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.111954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.112199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.112224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.112537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.112562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.112806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.112833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 
00:33:53.510 [2024-07-20 18:09:28.113103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.113146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.113382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.113408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.113655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.113680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.113936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.113963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.114272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.114297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.114528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.114554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.114820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.114849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.115084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.115110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.115351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.115377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.115615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.115641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 
00:33:53.510 [2024-07-20 18:09:28.115880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.115906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.116118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.116143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.116360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.116391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.116598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.116626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.116923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.116950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.117167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.117193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.117477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.117506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.117787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.117820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.118057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.118085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.118325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.118352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 
00:33:53.510 [2024-07-20 18:09:28.118563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.118589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.118857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.118901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.119113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.119139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.119345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.119370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.119644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.119669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.119935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.119978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.120221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.120247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.120448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.120474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.120714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.120742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.121065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.121091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 
00:33:53.510 [2024-07-20 18:09:28.121341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.121366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.121605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.121631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.121901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.121927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.122165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.122191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.122409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.122435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.122648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.122673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.122903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.122948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.123214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.123239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.510 qpair failed and we were unable to recover it. 00:33:53.510 [2024-07-20 18:09:28.123659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.510 [2024-07-20 18:09:28.123710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.511 qpair failed and we were unable to recover it. 00:33:53.511 [2024-07-20 18:09:28.123998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.511 [2024-07-20 18:09:28.124025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.511 qpair failed and we were unable to recover it. 
00:33:53.511 [2024-07-20 18:09:28.124265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.511 [2024-07-20 18:09:28.124290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.511 qpair failed and we were unable to recover it. 00:33:53.511 [2024-07-20 18:09:28.124562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.511 [2024-07-20 18:09:28.124588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.511 qpair failed and we were unable to recover it. 00:33:53.511 [2024-07-20 18:09:28.124853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.511 [2024-07-20 18:09:28.124898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.511 qpair failed and we were unable to recover it. 00:33:53.511 [2024-07-20 18:09:28.125171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.511 [2024-07-20 18:09:28.125197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.511 qpair failed and we were unable to recover it. 00:33:53.511 [2024-07-20 18:09:28.125437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.511 [2024-07-20 18:09:28.125462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.511 qpair failed and we were unable to recover it. 00:33:53.511 [2024-07-20 18:09:28.125706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.511 [2024-07-20 18:09:28.125732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.511 qpair failed and we were unable to recover it. 00:33:53.511 [2024-07-20 18:09:28.125946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.511 [2024-07-20 18:09:28.125972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.511 qpair failed and we were unable to recover it. 00:33:53.511 [2024-07-20 18:09:28.126214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.511 [2024-07-20 18:09:28.126240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.511 qpair failed and we were unable to recover it. 00:33:53.511 [2024-07-20 18:09:28.126524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.511 [2024-07-20 18:09:28.126567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.511 qpair failed and we were unable to recover it. 00:33:53.511 [2024-07-20 18:09:28.126868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.511 [2024-07-20 18:09:28.126894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.511 qpair failed and we were unable to recover it. 
00:33:53.511 [2024-07-20 18:09:28.127123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.511 [2024-07-20 18:09:28.127148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.511 qpair failed and we were unable to recover it.
00:33:53.511 [... the same pair of errors -- posix.c:1037:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 -- repeats for every reconnect attempt from 18:09:28.127 through 18:09:28.190, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:33:53.515 [2024-07-20 18:09:28.190775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.190805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it.
00:33:53.515 [2024-07-20 18:09:28.191090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.191119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it. 00:33:53.515 [2024-07-20 18:09:28.191377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.191405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it. 00:33:53.515 [2024-07-20 18:09:28.191871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.191910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it. 00:33:53.515 [2024-07-20 18:09:28.192170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.192196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it. 00:33:53.515 [2024-07-20 18:09:28.192445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.192473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it. 00:33:53.515 [2024-07-20 18:09:28.192739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.192768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it. 00:33:53.515 [2024-07-20 18:09:28.193036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.193062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it. 00:33:53.515 [2024-07-20 18:09:28.193317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.193342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it. 00:33:53.515 [2024-07-20 18:09:28.193652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.193680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it. 00:33:53.515 [2024-07-20 18:09:28.193975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.194004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it. 
00:33:53.515 [2024-07-20 18:09:28.194304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.194332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it. 00:33:53.515 [2024-07-20 18:09:28.194600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.194626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it. 00:33:53.515 [2024-07-20 18:09:28.194937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.194966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it. 00:33:53.515 [2024-07-20 18:09:28.195228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.195256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it. 00:33:53.515 [2024-07-20 18:09:28.195739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.195787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it. 00:33:53.515 [2024-07-20 18:09:28.196068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.196093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it. 00:33:53.515 [2024-07-20 18:09:28.196375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.515 [2024-07-20 18:09:28.196402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.515 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.196640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.196668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.196925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.196954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.197287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.197339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 
00:33:53.516 [2024-07-20 18:09:28.197595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.197623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.197875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.197904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.198193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.198222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.198523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.198564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.198834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.198863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.199126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.199154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.199417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.199446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.199734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.199759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.200026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.200052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.200336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.200365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 
00:33:53.516 [2024-07-20 18:09:28.200886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.200914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.201191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.201216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.201484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.201512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.201783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.201824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.202094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.202122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.202388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.202414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.202689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.202717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.202986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.203015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.203248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.203277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.203503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.203528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 
00:33:53.516 [2024-07-20 18:09:28.203817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.203845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.204110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.204138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.204423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.204449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.204869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.204894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.205194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.205223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.205495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.205523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.205798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.205826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.206129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.206154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.206514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.206539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.206797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.206823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 
00:33:53.516 [2024-07-20 18:09:28.207099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.207127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.207392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.207418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.207681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.207709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.208013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.208042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.208316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.208344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.208603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.208627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.208944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.208972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.516 [2024-07-20 18:09:28.209239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.516 [2024-07-20 18:09:28.209265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.516 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.209599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.209629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.209879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.209909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 
00:33:53.517 [2024-07-20 18:09:28.210198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.210227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.210499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.210529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.210922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.210951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.211209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.211235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.211465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.211490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.211759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.211787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.212055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.212084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.212370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.212396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.212667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.212695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.212955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.212986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 
00:33:53.517 [2024-07-20 18:09:28.213228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.213256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.213568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.213608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.213905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.213930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.214160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.214186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.214420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.214446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.214747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.214786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.215096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.215125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.215383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.215413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.215698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.215727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.215990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.216015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 
00:33:53.517 [2024-07-20 18:09:28.216319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.216347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.216637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.216666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.216954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.216983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.217252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.217278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.217585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.217613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.217869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.217898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.218192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.218220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.218507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.218532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.218834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.218862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.219129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.219157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 
00:33:53.517 [2024-07-20 18:09:28.219457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.219485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.219740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.219765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.220011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.220038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.220369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.220428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.220898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.220927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.221214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.221240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.221517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.221545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.221817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.221846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.222097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.222121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.222350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.222379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 
00:33:53.517 [2024-07-20 18:09:28.222738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.222767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.223058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.223084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.223358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.223386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.223660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.223684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.223964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.223990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.224204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.224230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.224534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.517 [2024-07-20 18:09:28.224562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.517 qpair failed and we were unable to recover it. 00:33:53.517 [2024-07-20 18:09:28.224825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.224852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.225121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.225150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.225381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.225410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 
00:33:53.518 [2024-07-20 18:09:28.225706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.225734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.225994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.226020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.226306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.226335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.226589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.226617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.226976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.227005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.227243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.227269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.227508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.227536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.227808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.227834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.228050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.228075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.228317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.228341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 
00:33:53.518 [2024-07-20 18:09:28.228642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.228670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.228927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.228956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.229222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.229251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.229478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.229504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.229781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.229817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.230082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.230110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.230412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.230440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.230726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.230752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.230973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.230999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.231255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.231284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 
00:33:53.518 [2024-07-20 18:09:28.231568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.231596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.231888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.231914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.232203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.232232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.232493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.232521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.232750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.232780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.233041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.233067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.233337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.233365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.233591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.233622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.233862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.233893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.234184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.234228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 
00:33:53.518 [2024-07-20 18:09:28.234474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.234502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.234852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.234894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.235152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.235180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.235462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.235487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.235738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.235766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.236012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.236039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.236304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.236329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.236898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.236926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.237183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.237210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.237502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.237531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 
00:33:53.518 [2024-07-20 18:09:28.237823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.237851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.238113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.238140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.238454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.238482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.238754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.238782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.239050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.239080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.518 [2024-07-20 18:09:28.239323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.518 [2024-07-20 18:09:28.239348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.518 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.239607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.239635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.239900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.239931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.240192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.240221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.240450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.240476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 
00:33:53.519 [2024-07-20 18:09:28.240737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.240765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.241004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.241030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.241302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.241331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.241607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.241631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.241917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.241944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.242205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.242234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.242471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.242499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.242759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.242784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.243126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.243153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.243463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.243492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 
00:33:53.519 [2024-07-20 18:09:28.243918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.243946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.244233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.244259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.244539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.244568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.244833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.244863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.245149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.245177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.245450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.245475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.245828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.245858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.246091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.246121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.246385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.246413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.246698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.246728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 
00:33:53.519 [2024-07-20 18:09:28.247037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.247065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.247308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.247336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.247622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.247650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.247914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.247940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.248191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.248219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.248493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.248521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.248814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.248843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.249112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.249137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.249370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.249399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.249688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.249717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 
00:33:53.519 [2024-07-20 18:09:28.250016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.250044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.250332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.250374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.250661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.250687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.250971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.251000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.251234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.251262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.251565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.251606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.251843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.251872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.252156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.252182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.252419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.519 [2024-07-20 18:09:28.252445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.519 qpair failed and we were unable to recover it. 00:33:53.519 [2024-07-20 18:09:28.252668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.252694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 
00:33:53.520 [2024-07-20 18:09:28.253005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.253034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.253269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.253297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.253557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.253587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.253825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.253852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.254140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.254168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.254435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.254463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.254897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.254926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.255197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.255222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.255463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.255491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.255777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.255811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 
00:33:53.520 [2024-07-20 18:09:28.256102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.256130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.256428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.256454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.256754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.256779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.257040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.257070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.257342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.257370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.257725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.257749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.258063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.258089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.258347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.258376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.258642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.258671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.258941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.258974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 
00:33:53.520 [2024-07-20 18:09:28.259213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.259238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.259519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.259548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.259815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.259840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.260116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.260144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.260409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.260437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.260780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.260815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.261098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.261123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.261426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.261455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.261922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.261948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.262214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.262241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 
00:33:53.520 [2024-07-20 18:09:28.262519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.262543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.262824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.262869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.263127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.263155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.263421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.263449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.263809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.263834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.264076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.264117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.264380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.264409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.264664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.264692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.264932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.264958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.265251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.265280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 
00:33:53.520 [2024-07-20 18:09:28.265781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.265858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.266136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.266164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.266389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.266415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.266882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.266925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.267169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.267195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.267479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.267507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.267852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.267878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.268123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.268151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.520 [2024-07-20 18:09:28.268445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.520 [2024-07-20 18:09:28.268469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.520 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.268707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.268732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 
00:33:53.521 [2024-07-20 18:09:28.268941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.268967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.269198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.269225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.269783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.269866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.270106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.270135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.270456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.270498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.270787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.270843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.271145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.271172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.271467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.271496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.271750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.271776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.272028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.272058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 
00:33:53.521 [2024-07-20 18:09:28.272364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.272392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.272654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.272683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.272943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.272970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.273210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.273240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.273532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.273560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.273824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.273853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.274122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.274147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.274436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.274464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.274747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.274776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.275045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.275075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 
00:33:53.521 [2024-07-20 18:09:28.275336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.275362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.275617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.275643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.275882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.275908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.276162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.276190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.276467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.276491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.276780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.276824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.277082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.277107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.277442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.277469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.277777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.277810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.278074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.278102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 
00:33:53.521 [2024-07-20 18:09:28.278359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.278387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.278754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.278823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.279085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.279109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.279371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.279396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.279675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.279704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.279978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.280004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.280481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.280521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.280777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.280811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.281074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.281103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.281368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.281396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 
00:33:53.521 [2024-07-20 18:09:28.281657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.281682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.281985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.282013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.282245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.282274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.282548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.282574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.282846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.521 [2024-07-20 18:09:28.282872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.521 qpair failed and we were unable to recover it. 00:33:53.521 [2024-07-20 18:09:28.283090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.793 [2024-07-20 18:09:28.283116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.793 qpair failed and we were unable to recover it. 00:33:53.793 [2024-07-20 18:09:28.283356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.793 [2024-07-20 18:09:28.283384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.793 qpair failed and we were unable to recover it. 00:33:53.793 [2024-07-20 18:09:28.283863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.793 [2024-07-20 18:09:28.283892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.793 qpair failed and we were unable to recover it. 00:33:53.793 [2024-07-20 18:09:28.284127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.793 [2024-07-20 18:09:28.284153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.793 qpair failed and we were unable to recover it. 00:33:53.793 [2024-07-20 18:09:28.284406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.793 [2024-07-20 18:09:28.284438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.793 qpair failed and we were unable to recover it. 
00:33:53.793 [2024-07-20 18:09:28.284675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:53.793 [2024-07-20 18:09:28.284702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 
00:33:53.793 qpair failed and we were unable to recover it. 
00:33:53.797 [2024-07-20 18:09:28.347661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:53.797 [2024-07-20 18:09:28.347687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 
00:33:53.797 qpair failed and we were unable to recover it. 
00:33:53.797 [2024-07-20 18:09:28.347925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.797 [2024-07-20 18:09:28.347951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.797 qpair failed and we were unable to recover it. 00:33:53.797 [2024-07-20 18:09:28.348165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.797 [2024-07-20 18:09:28.348191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.797 qpair failed and we were unable to recover it. 00:33:53.797 [2024-07-20 18:09:28.348432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.797 [2024-07-20 18:09:28.348458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.797 qpair failed and we were unable to recover it. 00:33:53.797 [2024-07-20 18:09:28.348779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.797 [2024-07-20 18:09:28.348811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.797 qpair failed and we were unable to recover it. 00:33:53.797 [2024-07-20 18:09:28.349048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.797 [2024-07-20 18:09:28.349073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.797 qpair failed and we were unable to recover it. 00:33:53.797 [2024-07-20 18:09:28.349343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.797 [2024-07-20 18:09:28.349368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.797 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.349604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.349693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.349949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.349975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.350193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.350220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.350430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.350455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 
00:33:53.798 [2024-07-20 18:09:28.350695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.350724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.350960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.350986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.351201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.351227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.351534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.351563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.351713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22050f0 is same with the state(5) to be set 00:33:53.798 [2024-07-20 18:09:28.352015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.352053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.352300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.352326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.352582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.352623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.352879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.352905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.353118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.353143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 
00:33:53.798 [2024-07-20 18:09:28.353421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.353461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.353786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.353817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.354053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.354078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.354488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.354529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.354883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.354908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.355212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.355241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.355651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.355697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.355940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.355965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.356237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.356265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.356737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.356784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 
00:33:53.798 [2024-07-20 18:09:28.357049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.357083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.357351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.357379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.357725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.357750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.357969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.357995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.358260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.358288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.358786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.358817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.359085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.359113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.359371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.359399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.359771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.359803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.360022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.360048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 
00:33:53.798 [2024-07-20 18:09:28.360325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.360353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.360806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.360832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.361074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.361099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.361484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.361548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.361836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.361863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.362106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.362134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.362405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.362451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.362697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.362723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.363000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.363029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.363293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.363322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 
00:33:53.798 [2024-07-20 18:09:28.363872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.363912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.364200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.364227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.798 [2024-07-20 18:09:28.364593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.798 [2024-07-20 18:09:28.364638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.798 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.364939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.364967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.365242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.365269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.365514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.365539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.365805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.365832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.366098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.366131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.366420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.366448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.366805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.366831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 
00:33:53.799 [2024-07-20 18:09:28.367126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.367153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.367388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.367416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.367803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.367828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.368096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.368124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.368406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.368434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.368739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.368763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.369041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.369066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.369372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.369400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.369745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.369770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.370030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.370068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 
00:33:53.799 [2024-07-20 18:09:28.370389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.370418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.370732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.370782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.371051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.371082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.371519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.371549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.371817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.371843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.372077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.372121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.372423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.372474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.372943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.372970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.373175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.373202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.373549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.373591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 
00:33:53.799 [2024-07-20 18:09:28.373874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.373899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.374141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.374167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.374395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.374421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.374808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.374834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.375041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.375088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.375357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.375385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.375859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.375884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.376097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.376123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.376508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.376563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.376881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.376907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 
00:33:53.799 [2024-07-20 18:09:28.377172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.377200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.377435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.377463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.377806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.377849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.378066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.378108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.378396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.378424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.378717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.378759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.379049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.379075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.379308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.379333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.379582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.379607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.379900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.379927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 
00:33:53.799 [2024-07-20 18:09:28.380196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.380224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.380657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.380707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.380973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.380999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.381252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.799 [2024-07-20 18:09:28.381280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.799 qpair failed and we were unable to recover it. 00:33:53.799 [2024-07-20 18:09:28.381541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.381567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.381886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.381912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.382144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.382173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.382430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.382456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.382722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.382787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.383054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.383079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 
00:33:53.800 [2024-07-20 18:09:28.383291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.383317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.383774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.383848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.384064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.384090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.384324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.384350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.384870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.384897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.385157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.385186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.385426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.385451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.385666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.385691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.385905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.385931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.386192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.386217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 
00:33:53.800 [2024-07-20 18:09:28.386486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.386515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.386777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.386813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.387076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.387101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.387367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.387395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.387675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.387714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.388006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.388031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.388334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.388376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.388642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.388670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.388929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.388955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 00:33:53.800 [2024-07-20 18:09:28.389251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.800 [2024-07-20 18:09:28.389279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.800 qpair failed and we were unable to recover it. 
00:33:53.800 [2024-07-20 18:09:28.389552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:53.800 [2024-07-20 18:09:28.389580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420
00:33:53.800 qpair failed and we were unable to recover it.
(The same three-line error record repeats back-to-back for the remainder of this span, with only the timestamps advancing, from [2024-07-20 18:09:28.389552] through [2024-07-20 18:09:28.454549]; the failing tqpair 0x7f584c000b90, the target 10.0.0.2 port 4420, and errno 111 are identical in every repetition.)
00:33:53.805 [2024-07-20 18:09:28.454779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.454814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.455101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.455126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.455405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.455434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.455882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.455911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.456351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.456419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.456723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.456766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.457034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.457062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.457306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.457332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.457617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.457645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.457888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.457918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 
00:33:53.805 [2024-07-20 18:09:28.458196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.458220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.458532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.458560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.458847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.458876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.459167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.459209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.459475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.459503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.459790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.459837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.460198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.460242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.460548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.460579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.460884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.460910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.461204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.461246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 
00:33:53.805 [2024-07-20 18:09:28.461515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.461543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.461828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.461858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.462153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.462192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.462493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.462520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.462822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.462853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.463286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.463349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.463627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.463656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.463945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.463972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.464227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.464252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.464547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.464575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 
00:33:53.805 [2024-07-20 18:09:28.464878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.464904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.465190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.465215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.465497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.465527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.465777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.465814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.466109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.466150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.466448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.466474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.466760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.466789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.467060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.467085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.467403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.467431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.467672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.467713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 
00:33:53.805 [2024-07-20 18:09:28.467952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.467977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.468253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.468282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.805 qpair failed and we were unable to recover it. 00:33:53.805 [2024-07-20 18:09:28.468520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.805 [2024-07-20 18:09:28.468548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.468818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.468862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.469194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.469223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.469515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.469544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.469804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.469833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.470100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.470128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.470393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.470421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.470710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.470753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 
00:33:53.806 [2024-07-20 18:09:28.471025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.471052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.471308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.471337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.471829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.471876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.472144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.472170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.472446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.472474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.472739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.472763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.473114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.473139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.473451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.473479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.473900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.473941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.474222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.474250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 
00:33:53.806 [2024-07-20 18:09:28.474544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.474572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.474887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.474913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.475179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.475208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.475503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.475531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.475939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.475968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.476244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.476272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.476508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.476537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.476802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.476828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.477101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.477129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.477421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.477450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 
00:33:53.806 [2024-07-20 18:09:28.477727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.477751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.478033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.478059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.478358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.478386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.478677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.478702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.478994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.479020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.479397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.479462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.479899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.479940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.480211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.480241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.480505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.480534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.480816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.480843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 
00:33:53.806 [2024-07-20 18:09:28.481147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.481176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.481440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.481468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.481893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.481934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.482251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.482280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.482558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.482585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.482870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.482896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.483165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.483193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.483482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.483510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.483924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.483949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.484222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.484250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 
00:33:53.806 [2024-07-20 18:09:28.484537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.484565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.484887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.484927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.485219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.485247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.806 [2024-07-20 18:09:28.485513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.806 [2024-07-20 18:09:28.485541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.806 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.485828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.485870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.486256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.486299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.486616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.486656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.486933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.486960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.487351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.487406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.487696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.487724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 
00:33:53.807 [2024-07-20 18:09:28.487986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.488012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.488294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.488323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.488618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.488647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.488901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.488927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.489218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.489247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.489522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.489551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.489904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.489930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.490180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.490206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.490424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.490466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.490732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.490803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 
00:33:53.807 [2024-07-20 18:09:28.491101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.491129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.491418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.491460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.491909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.491935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.492186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.492212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.492482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.492511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.492802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.492828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.493079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.493107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.493341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.493369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.493639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.493664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.493961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.493987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 
00:33:53.807 [2024-07-20 18:09:28.494278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.494306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.494565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.494590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.494933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.494957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.495245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.495274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.495690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.495738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.495999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.496025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.496300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.496328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.496564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.496588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.496836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.496865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.497153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.497180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 
00:33:53.807 [2024-07-20 18:09:28.497483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.497508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.497942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.497972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.498268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.498297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.807 qpair failed and we were unable to recover it. 00:33:53.807 [2024-07-20 18:09:28.498612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.807 [2024-07-20 18:09:28.498636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.498924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.498951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.499511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.499574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.499936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.499963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.500278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.500307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.500600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.500628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.500946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.500972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 
00:33:53.808 [2024-07-20 18:09:28.501258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.501287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.501573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.501600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.501866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.501891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.502178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.502208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.502466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.502494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.502783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.502816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.503104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.503133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.503368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.503396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.503853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.503887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.504173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.504200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 
00:33:53.808 [2024-07-20 18:09:28.504466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.504494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.504882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.504912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.505199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.505239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.505500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.505529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.505871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.505913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.506186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.506215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.506505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.506533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.506894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.506919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.507235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.507264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.507550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.507578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 
00:33:53.808 [2024-07-20 18:09:28.507884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.507909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.508178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.508207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.508470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.508498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.508813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.508857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.509146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.509174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.509463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.509491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.509806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.509849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.510154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.510181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.510475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.510503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.510912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.510937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 
00:33:53.808 [2024-07-20 18:09:28.511238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.511267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.511529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.511562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.511905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.511930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.512210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.512239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.512531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.512571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.512783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.512812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.513085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.513114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.513401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.513429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.513908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.513933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.514215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.514244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 
00:33:53.808 [2024-07-20 18:09:28.514534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.514562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.514820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.514862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.515199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.515228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.515489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.515518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.515837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.515863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.516127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.516155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.516424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.808 [2024-07-20 18:09:28.516451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.808 qpair failed and we were unable to recover it. 00:33:53.808 [2024-07-20 18:09:28.516770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.516829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.517130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.517158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.517392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.517420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 
00:33:53.809 [2024-07-20 18:09:28.517674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.517700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.517966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.517996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.518268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.518296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.518576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.518602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.518887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.518913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.519131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.519157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.519395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.519421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.519753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.519781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.520041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.520069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.520380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.520422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 
00:33:53.809 [2024-07-20 18:09:28.520691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.520719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.520956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.520986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.521273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.521298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.521616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.521644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.521883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.521911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.522175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.522201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.522478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.522508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.522774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.522808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.523236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.523300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.523602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.523633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 
00:33:53.809 [2024-07-20 18:09:28.523936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.523976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.524231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.524262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.524544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.524574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.524865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.524894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.525151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.525191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.525431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.525459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.525762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.525790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.526065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.526092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.526339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.526368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.526654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.526683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 
00:33:53.809 [2024-07-20 18:09:28.526945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.526971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.527256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.527284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.527576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.527604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.527884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.527909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.528114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.528139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.528441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.528470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.528719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.528744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.529011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.529037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.529291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.529319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.529574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.529600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 
00:33:53.809 [2024-07-20 18:09:28.529922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.529948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.530200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.530228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.530512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.530537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.530822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.530862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.531277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.531349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.809 [2024-07-20 18:09:28.531896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.809 [2024-07-20 18:09:28.531938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.809 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.532195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.532225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.532515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.532544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.532815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.532842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.533118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.533146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 
00:33:53.810 [2024-07-20 18:09:28.533372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.533400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.533660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.533685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.533935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.533962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.534254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.534282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.534536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.534562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.534867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.534893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.535157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.535185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.535565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.535619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.535923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.535948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.536239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.536268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 
00:33:53.810 [2024-07-20 18:09:28.536601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.536625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.536931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.536965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.537234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.537263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.537525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.537550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.537826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.537855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.538120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.538149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.538436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.538462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.538761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.538791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.539098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.539127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.539416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.539442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 
00:33:53.810 [2024-07-20 18:09:28.539747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.539775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.540037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.540063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.540350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.540374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.540681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.540709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.540951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.540979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.541271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.541297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.541601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.541626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.541912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.541938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.542243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.542285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.542566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.542605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 
00:33:53.810 [2024-07-20 18:09:28.542911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.542937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.543194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.543219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.543495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.543524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.543788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.543837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.544283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.544326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.544637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.544668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.544954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.544994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.545366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.545395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.545687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.545716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.545981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.546007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 
00:33:53.810 [2024-07-20 18:09:28.546270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.546295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.546569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.546599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.546876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.546903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.547276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.547313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.547632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.547663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.547921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.547949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.548241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.810 [2024-07-20 18:09:28.548266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.810 qpair failed and we were unable to recover it. 00:33:53.810 [2024-07-20 18:09:28.548510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.548539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.548814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.548859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.549173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.549198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 
00:33:53.811 [2024-07-20 18:09:28.549468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.549498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.549757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.549809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.550151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.550176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.550564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.550620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.550886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.550912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.551164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.551188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.551491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.551520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.551834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.551860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.552174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.552216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.552512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.552540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 
00:33:53.811 [2024-07-20 18:09:28.552841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.552868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.553150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.553175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.553462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.553491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.553926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.553952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.554204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.554230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.554499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.554529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.554775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.554830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.555278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.555321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.555629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.555660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.555947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.555978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 
00:33:53.811 [2024-07-20 18:09:28.556240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.556266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.556541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.556569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.556845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.556871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.557110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.557135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.557428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.557457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.557743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.557771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.558110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.558150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.558443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.558472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.558755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.558785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.559061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.559087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 
00:33:53.811 [2024-07-20 18:09:28.559382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.559407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.559780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.559816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.560094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.560121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.560377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.560405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.560708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.560735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.561124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.561168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.561483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.561513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.561810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.561854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.811 [2024-07-20 18:09:28.562195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.811 [2024-07-20 18:09:28.562225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.811 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.562511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.562539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 
00:33:53.812 [2024-07-20 18:09:28.562818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.562848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.563112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.563143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.563432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.563460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.563698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.563739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.564038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.564065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.564347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.564375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.564598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.564627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.564900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.564926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.565186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.565214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.565475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.565503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 
00:33:53.812 [2024-07-20 18:09:28.565921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.565946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.566215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.566243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.566511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.566539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.566838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.566864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.567133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.567161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.567408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.567436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.567840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.567866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.568123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.568150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.568385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.568413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.568642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.568668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 
00:33:53.812 [2024-07-20 18:09:28.568949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.568975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.569250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.569279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.569598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.569626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.569911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.569938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.570247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.570275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.570529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.570555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.570845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.570874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.571162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.571187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.571402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.571427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.571690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.571719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 
00:33:53.812 [2024-07-20 18:09:28.572003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.572028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.572271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.572297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.572566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.572593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.572851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.572878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.573144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.573170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.573454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.573482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.573749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.573777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.574053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.574079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.574353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.574381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.574666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.574694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 
00:33:53.812 [2024-07-20 18:09:28.574979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.575022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.575291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.575325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:53.812 [2024-07-20 18:09:28.575558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.812 [2024-07-20 18:09:28.575588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:53.812 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.575871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.575898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.576139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.576164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.576398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.576423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.576637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.576663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.576885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.576927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.577218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.577247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.577510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.577535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 
00:33:54.084 [2024-07-20 18:09:28.577814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.577856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.578072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.578117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.578357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.578383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.578638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.578667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.578964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.578991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.579253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.579279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.579558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.579587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.579870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.579896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.580160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.580186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.580447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.580476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 
00:33:54.084 [2024-07-20 18:09:28.580709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.580738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.580999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.581025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.581328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.581356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.581640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.581668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.581925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.581951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.582220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.582249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.582529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.582557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.582825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.582866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.583126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.583154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 00:33:54.084 [2024-07-20 18:09:28.583444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.084 [2024-07-20 18:09:28.583472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.084 qpair failed and we were unable to recover it. 
00:33:54.084 [2024-07-20 18:09:28.583913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.583939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.584233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.584262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.584547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.584575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.584862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.584888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.585174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.585202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.585474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.585502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.585790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.585824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.586101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.586130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.586420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.586449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.586698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.586723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 
00:33:54.085 [2024-07-20 18:09:28.586976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.587003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.587323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.587396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.587899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.587925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.588162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.588188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.588483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.588512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.588786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.588830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.589070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.589098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.589386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.589411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.589695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.589721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.589997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.590024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 
00:33:54.085 [2024-07-20 18:09:28.590302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.590330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.590583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.590607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.590955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.590981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.591229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.591257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.591570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.591610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.591892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.591918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.592189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.592216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.592477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.592503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.592812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.592840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.593102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.593130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 
00:33:54.085 [2024-07-20 18:09:28.593386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.593412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.593811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.593887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.594179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.594207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.594460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.594485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.594859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.594884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.595163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.595190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.595512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.595555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.595813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.595855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.596075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.596101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.596331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.596356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 
00:33:54.085 [2024-07-20 18:09:28.596608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.596636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.596899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.596929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.597194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.597220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.597484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.597514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.597778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.597809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.598147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.598176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.598547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.598572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.598884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.598911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.599132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.599158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.599431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.599460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 
00:33:54.085 [2024-07-20 18:09:28.599810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.085 [2024-07-20 18:09:28.599872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.085 qpair failed and we were unable to recover it. 00:33:54.085 [2024-07-20 18:09:28.600167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.600196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.600446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.600474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.600771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.600826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.601069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.601108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.601367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.601395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.601694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.601722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.601994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.602020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.602302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.602331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.602685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.602710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 
00:33:54.086 [2024-07-20 18:09:28.603055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.603081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.603343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.603373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.603613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.603641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.603870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.603896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.604157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.604185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.604457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.604485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.604747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.604773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.605051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.605092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.605339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.605366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.605601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.605642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 
00:33:54.086 [2024-07-20 18:09:28.605978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.606004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.606339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.606363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.606603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.606629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.606906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.606947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.607231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.607259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.607584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.607608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.607905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.607939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.608229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.608257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.608522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.608547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.608766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.608818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 
00:33:54.086 [2024-07-20 18:09:28.609073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.609100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.609370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.609395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.609639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.609667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.609929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.609954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.610309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.610337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.610570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.610600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.610859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.610886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.611131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.611157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.611429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.611457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.611719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.611745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 
00:33:54.086 [2024-07-20 18:09:28.612122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.612186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.612473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.612511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.612815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.612845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.613087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.613127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.613493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.613522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.613783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.613822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.614111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.614137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.614437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.614466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.614734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.614762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 00:33:54.086 [2024-07-20 18:09:28.615040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.615066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.086 qpair failed and we were unable to recover it. 
00:33:54.086 [2024-07-20 18:09:28.615360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.086 [2024-07-20 18:09:28.615389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.615652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.615681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.615949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.615977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.616220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.616249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.616539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.616567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.616925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.616955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.617225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.617253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.617492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.617521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.617815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.617858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.618098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.618123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 
00:33:54.087 [2024-07-20 18:09:28.618429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.618457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.618900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.618926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.619162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.619188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.619459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.619487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.619749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.619775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.620003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.620030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.620302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.620331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.620818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.620874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.621136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.621164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.621455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.621483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 
00:33:54.087 [2024-07-20 18:09:28.621770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.621803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.622109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.622137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.622405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.622432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.622715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.622740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.623004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.623033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.623293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.623321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.623607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.623633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.623917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.623943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.624151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.624193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.624506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.624535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 
00:33:54.087 [2024-07-20 18:09:28.624777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.624818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.625098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.625131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.625657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.625706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.625963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.625989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.626278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.626307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.626570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.626595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.626871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.626900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.627192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.627220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.627479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.627504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.627739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.627769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 
00:33:54.087 [2024-07-20 18:09:28.628046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.628074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.087 [2024-07-20 18:09:28.628343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.087 [2024-07-20 18:09:28.628370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.087 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.628697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.628726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.629001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.629027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.629276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.629317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.629553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.629583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.629862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.629888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.630125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.630166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.630440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.630469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.630734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.630762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 
00:33:54.088 [2024-07-20 18:09:28.631012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.631038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.631270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.631312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.631604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.631632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.631908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.631934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.632226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.632254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.632540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.632568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.632845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.632871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.633095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.633122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.633422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.633449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.633932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.633956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 
00:33:54.088 [2024-07-20 18:09:28.634227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.634255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.634522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.634550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.634813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.634839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.635093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.635122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.635401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.635429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.635680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.635705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.635977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.636006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.636298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.636326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.636625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.636665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.636952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.636978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 
00:33:54.088 [2024-07-20 18:09:28.637276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.637304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.637816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.637875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.638128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.638157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.638409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.638438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.638670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.638696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.638924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.638969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.639208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.639236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.639499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.639525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.639799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.639828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.640089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.640117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 
00:33:54.088 [2024-07-20 18:09:28.640374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.640399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.640688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.640718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.640944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.640970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.641219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.641245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.641529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.641557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.641808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.641836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.642124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.642149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.642432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.642460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.642728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.642756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 00:33:54.088 [2024-07-20 18:09:28.643037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.643063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.088 qpair failed and we were unable to recover it. 
00:33:54.088 [2024-07-20 18:09:28.643358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.088 [2024-07-20 18:09:28.643387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.643653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.643681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.643915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.643941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.644209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.644238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.644508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.644537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.644875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.644901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.645138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.645163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.645379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.645419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.645876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.645902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.646150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.646176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 
00:33:54.089 [2024-07-20 18:09:28.646384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.646410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.646674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.646700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.646973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.647002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.647259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.647288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.647582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.647607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.647850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.647880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.648143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.648172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.648435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.648461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.648867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.648892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.649199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.649227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 
00:33:54.089 [2024-07-20 18:09:28.649779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.649832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.650161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.650190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.650438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.650466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.650815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.650857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.651117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.651147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.651437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.651466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.651912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.651937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.652233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.652261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.652549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.652577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.652868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.652895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 
00:33:54.089 [2024-07-20 18:09:28.653373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.653432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.653713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.653744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.654020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.654046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.654325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.654353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.654620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.654650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.654987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.655013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.655420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.655469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.655798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.655827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.656069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.656095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.656369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.656397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 
00:33:54.089 [2024-07-20 18:09:28.656665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.656693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.657017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.657046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.657314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.657342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.657830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.657887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.658171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.658196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.658518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.658547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.658790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.658832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.659194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.659250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.659533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.659570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 00:33:54.089 [2024-07-20 18:09:28.659816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.089 [2024-07-20 18:09:28.659847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.089 qpair failed and we were unable to recover it. 
00:33:54.090 [2024-07-20 18:09:28.660124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.090 [2024-07-20 18:09:28.660149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420
00:33:54.090 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every retry between 18:09:28.660426 and 18:09:28.725236 ...]
00:33:54.094 [2024-07-20 18:09:28.725760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.094 [2024-07-20 18:09:28.725819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420
00:33:54.094 qpair failed and we were unable to recover it.
00:33:54.094 [2024-07-20 18:09:28.726085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.726110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.726458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.726485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.726755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.726784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.727068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.727094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.727376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.727405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.727703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.727731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.727998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.728024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.728304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.728340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.728604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.728634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.728944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.728970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 
00:33:54.094 [2024-07-20 18:09:28.729282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.729311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.729741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.729791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.730039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.730065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.730355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.730383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.730647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.730675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.730970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.730996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.731263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.731292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.731560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.731588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.731887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.731913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.732213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.732238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 
00:33:54.094 [2024-07-20 18:09:28.732540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.732566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.732915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.732955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.733179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.733207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.733738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.733789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.734094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.734119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.734402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.734426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.734692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.734719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.734937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.734962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.735385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.094 [2024-07-20 18:09:28.735441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.094 qpair failed and we were unable to recover it. 00:33:54.094 [2024-07-20 18:09:28.735922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.735951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 
00:33:54.095 [2024-07-20 18:09:28.736216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.736241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.736551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.736579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.736877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.736906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.737188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.737213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.737483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.737512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.737815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.737846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.738100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.738125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.738502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.738557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.738856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.738885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.739330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.739373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 
00:33:54.095 [2024-07-20 18:09:28.739648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.739678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.739944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.739971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.740229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.740255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.740564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.740592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.740877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.740906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.741135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.741161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.741412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.741437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.741854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.741884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.742155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.742181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.742460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.742486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 
00:33:54.095 [2024-07-20 18:09:28.742769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.742806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.743115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.743156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.743421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.743453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.743930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.743955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.744223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.744248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.744561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.744590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.744883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.744912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.745196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.745237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.745523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.745551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.745797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.745826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 
00:33:54.095 [2024-07-20 18:09:28.746063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.746088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.746341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.746369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.746638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.746666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.746926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.746951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.747243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.747271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.747562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.747590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.747854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.747880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.748171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.748199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.748474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.748508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.748827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.748867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 
00:33:54.095 [2024-07-20 18:09:28.749131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.749160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.749697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.749746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.750009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.750034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.750289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.750316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.750605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.750633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.750890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.750916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.751160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.751188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.751442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.751471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.751910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.095 [2024-07-20 18:09:28.751935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.095 qpair failed and we were unable to recover it. 00:33:54.095 [2024-07-20 18:09:28.752178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.752203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 
00:33:54.096 [2024-07-20 18:09:28.752458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.752485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.752757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.752782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.753116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.753144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.753431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.753459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.753689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.753715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.754006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.754035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.754299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.754327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.754614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.754654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.754955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.754984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.755246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.755276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 
00:33:54.096 [2024-07-20 18:09:28.755508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.755533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.755789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.755821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.756094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.756123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.756383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.756408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.756648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.756676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.756953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.756982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.757264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.757305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.757571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.757599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.757850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.757879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.758158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.758199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 
00:33:54.096 [2024-07-20 18:09:28.758457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.758482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.758765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.758799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.759061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.759087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.759382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.759407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.759689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.759717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.759998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.760025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.760382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.760407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.760681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.760710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.761001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.761031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.761308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.761338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 
00:33:54.096 [2024-07-20 18:09:28.761859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.761888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.762174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.762199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.762469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.762497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.762760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.762789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.763071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.763097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.763389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.763417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.763684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.763712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.763984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.764010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.764294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.764322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.764609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.764637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 
00:33:54.096 [2024-07-20 18:09:28.764898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.764925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.765267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.765334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.765811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.765858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.766091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.766117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.766372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.766400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.766659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.766688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.766955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.766981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.767261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.767289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.767566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.767591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 00:33:54.096 [2024-07-20 18:09:28.767832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.096 [2024-07-20 18:09:28.767857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.096 qpair failed and we were unable to recover it. 
00:33:54.096 [2024-07-20 18:09:28.768133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.768161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.768424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.768452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.768714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.768739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.769025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.769051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.769333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.769362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.769628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.769654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.769901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.769931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.770219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.770247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.770528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.770569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.770839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.770868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 
00:33:54.097 [2024-07-20 18:09:28.771154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.771183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.771441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.771466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.771736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.771765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.772039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.772064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.772367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.772392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.772681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.772709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.772978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.773003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.773235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.773260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.773501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.773535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.773788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.773822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 
00:33:54.097 [2024-07-20 18:09:28.774077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.774103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.774376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.774405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.774669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.774697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.774960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.774986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.775252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.775280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.775568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.775597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.775901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.775928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.776221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.776249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.776517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.776547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.776836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.776861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 
00:33:54.097 [2024-07-20 18:09:28.777166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.777194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.777442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.777470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.777757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.777783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.778065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.778094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.778324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.778352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.778613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.778639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.778909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.778938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.779206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.779232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.779499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.779524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.779826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.779852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 
00:33:54.097 [2024-07-20 18:09:28.780117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.780144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.780393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.780418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.780657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.780687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.780961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.780989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.781221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.781247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.781518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.781547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.781838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.781871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.782153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.782179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.782456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.097 [2024-07-20 18:09:28.782485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.097 qpair failed and we were unable to recover it. 00:33:54.097 [2024-07-20 18:09:28.782737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.782765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 
00:33:54.098 [2024-07-20 18:09:28.783045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.783072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.783344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.783372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.783637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.783666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.783935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.783961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.784237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.784266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.784552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.784581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.784842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.784868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.785170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.785199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.785493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.785525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.785782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.785815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 
00:33:54.098 [2024-07-20 18:09:28.786070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.786098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.786358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.786386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.786669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.786694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.786968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.786994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.787290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.787319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.787554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.787579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.787861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.787890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.788131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.788158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.788381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.788406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.788670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.788698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 
00:33:54.098 [2024-07-20 18:09:28.788953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.788982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.789237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.789263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.789526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.789554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.789802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.789831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.790099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.790124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.790432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.790461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.790746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.790774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.791030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.791056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.791270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.791311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.791595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.791622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 
00:33:54.098 [2024-07-20 18:09:28.791868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.791894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.792208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.792233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.792509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.792537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.792802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.792828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.793074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.793101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.793369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.793398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.793632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.793671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.793957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.793986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.794229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.794257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.794545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.794587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 
00:33:54.098 [2024-07-20 18:09:28.794898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.794927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.795190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.795220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.795510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.795549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.795835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.795864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.098 qpair failed and we were unable to recover it. 00:33:54.098 [2024-07-20 18:09:28.796129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.098 [2024-07-20 18:09:28.796156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.796419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.796444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.796680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.796721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.797110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.797153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.797453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.797485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.797762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.797790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 
00:33:54.099 [2024-07-20 18:09:28.798089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.798117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.798444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.798510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.798799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.798826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.799296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.799339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.799642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.799669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.799974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.800001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.800275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.800304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.800553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.800578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.800915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.800944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.801219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.801247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 
00:33:54.099 [2024-07-20 18:09:28.801521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.801547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.801802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.801828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.802079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.802107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.802368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.802393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.802642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.802670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.802892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.802921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.803182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.803207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.803504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.803533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.803821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.803850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.804125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.804151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 
00:33:54.099 [2024-07-20 18:09:28.804434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.804462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.804902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.804931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.805188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.805213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.805507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.805535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.805798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.805826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.806094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.806121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.806411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.806440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.806704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.806732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.806990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.807016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.807255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.807286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 
00:33:54.099 [2024-07-20 18:09:28.807527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.807556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.807802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.807828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.808072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.808112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.808391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.808419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.808671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.808696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.808938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.808967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.809217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.809241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.809517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.809542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.809821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.809851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.810120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.810148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 
00:33:54.099 [2024-07-20 18:09:28.810384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.810408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.810716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.810743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.811022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.099 [2024-07-20 18:09:28.811049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.099 qpair failed and we were unable to recover it. 00:33:54.099 [2024-07-20 18:09:28.811319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.811344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.811670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.811698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.811939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.811965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.812206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.812232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.812517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.812546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.812830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.812858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.813138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.813162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 
00:33:54.100 [2024-07-20 18:09:28.813461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.813489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.813943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.813972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.814274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.814314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.814604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.814632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.814906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.814936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.815234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.815259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.815507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.815535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.815806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.815844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.816136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.816179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.816459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.816487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 
00:33:54.100 [2024-07-20 18:09:28.816924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.816953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.817225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.817249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.817512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.817540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.817797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.817826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.818179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.818216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.818508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.818546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.818813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.818843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.819135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.819160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.819440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.819469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.819897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.819925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 
00:33:54.100 [2024-07-20 18:09:28.820286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.820340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.820646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.820677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.820968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.820998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.821283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.821309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.821596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.821625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.821881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.821907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.822124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.822149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.822423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.822449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.822733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.822761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 
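The three-line sequence repeated above is the NVMe/TCP host retrying while nothing is listening on 10.0.0.2:4420: errno 111 is ECONNREFUSED, so the connect() inside posix_sock_create is refused and nvme_tcp_qpair_connect_sock abandons the qpair each time, only to try again on the next attempt. A minimal shell sketch of the same kind of probe, assuming only the address and port taken from the log (the loop bounds and messages are illustrative, not part of the test):

    #!/usr/bin/env bash
    # Illustrative only: probe 10.0.0.2:4420 the way the failing host does.
    # With no listener bound to the port, the connect() behind /dev/tcp fails
    # with ECONNREFUSED (errno 111), matching the posix_sock_create errors above.
    addr=10.0.0.2
    port=4420
    for attempt in 1 2 3 4 5; do
      if timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
        echo "attempt ${attempt}: connected to ${addr}:${port}"
        break
      fi
      echo "attempt ${attempt}: connection refused, retrying"
      sleep 1
    done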
00:33:54.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1103598 Killed "${NVMF_APP[@]}" "$@" 00:33:54.100 [2024-07-20 18:09:28.823057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.823091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 18:09:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:33:54.100 18:09:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:54.100 [2024-07-20 18:09:28.823374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.823403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 18:09:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 18:09:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:54.100 18:09:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.100 [2024-07-20 18:09:28.823721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.823778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.824055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.824090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.824296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.824322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.824538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.824579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.824858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.824885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 
00:33:54.100 [2024-07-20 18:09:28.825118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.825149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.825440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.825469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.825740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.825765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.100 qpair failed and we were unable to recover it. 00:33:54.100 [2024-07-20 18:09:28.826065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.100 [2024-07-20 18:09:28.826096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.826382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.826422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.826879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.826904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.827178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.827203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 18:09:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1104075 00:33:54.101 [2024-07-20 18:09:28.827469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.827498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 18:09:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:54.101 qpair failed and we were unable to recover it. 
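The restarted target is launched inside the cvl_0_0_ns_spdk network namespace as nvmf_tgt -i 0 -e 0xFFFF -m 0xF0. Reading those flags as shared-memory instance id (-i), tracepoint group mask (-e) and reactor core mask (-m) is an interpretation, not quoted documentation; the sketch below shows how the same invocation could be issued by hand, assuming the namespace already exists and hugepages are configured:

#!/usr/bin/env bash
# Manual equivalent of the nvmfappstart call traced above (sketch only).
NS=cvl_0_0_ns_spdk                   # network namespace used by this CI job
TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
sudo ip netns exec "$NS" "$TGT" \
    -i 0      `# instance / shared-memory id (assumed meaning)` \
    -e 0xFFFF `# tracepoint group mask (assumed meaning)` \
    -m 0xF0 &                        # run reactors on cores 4-7
echo "nvmf_tgt started with PID $!"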
00:33:54.101 18:09:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1104075 00:33:54.101 18:09:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1104075 ']' 00:33:54.101 [2024-07-20 18:09:28.827858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.827885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 18:09:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.101 18:09:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:54.101 18:09:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.101 [2024-07-20 18:09:28.828163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.828204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 18:09:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:54.101 18:09:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.101 [2024-07-20 18:09:28.828487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.828529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.828833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.828875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.829140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.829177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.829453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.829482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 
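waitforlisten blocks until the new target (nvmfpid=1104075) is up and its JSON-RPC socket accepts connections at /var/tmp/spdk.sock, giving up after max_retries=100 attempts. The loop below is a stand-in for that helper, assuming only that the RPC listener appears as a UNIX-domain socket at that path; the real function in autotest_common.sh also talks to the socket rather than just checking for its existence:

#!/usr/bin/env bash
# Simplified wait-for-RPC-socket loop (sketch of what waitforlisten does).
pid=1104075                     # nvmfpid from the trace above
rpc_addr=/var/tmp/spdk.sock
max_retries=100
for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || { echo "target died"; exit 1; }
    [ -S "$rpc_addr" ] && { echo "RPC socket ready after $i checks"; exit 0; }
    sleep 0.1
done
echo "timed out waiting for $rpc_addr"
exit 1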
00:33:54.101 [2024-07-20 18:09:28.829915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.829942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.830202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.830231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.830519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.830548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.830880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.830907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.831145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.831174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.831437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.831466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.831959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.831984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.832283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.832312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.832603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.832631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.832925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.832951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 
00:33:54.101 [2024-07-20 18:09:28.833392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.833436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.833903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.833932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.834192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.834218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.834492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.834521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.834781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.834826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.835076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.835104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.835413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.835441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.835852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.835878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.836149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.836176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.836452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.836481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 
00:33:54.101 [2024-07-20 18:09:28.836885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.836912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.837179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.837205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.837474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.837504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.837903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.837929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.838170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.838196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.838464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.838494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.838765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.838800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.839065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.839091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.839392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.839420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 00:33:54.101 [2024-07-20 18:09:28.839713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.101 [2024-07-20 18:09:28.839741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.101 qpair failed and we were unable to recover it. 
00:33:54.101 [2024-07-20 18:09:28.839973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.839998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.840257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.840285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.840569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.840597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.840891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.840917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.841183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.841211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.841466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.841494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.841924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.841950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.842226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.842254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.842510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.842543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.842858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.842884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 
00:33:54.102 [2024-07-20 18:09:28.843193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.843221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.843482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.843511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.843973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.843998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.844339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.844400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.844680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.844709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.845011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.845037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.845411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.845481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.845938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.845965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.846186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.846212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.846472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.846501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 
00:33:54.102 [2024-07-20 18:09:28.846790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.846823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.847252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.847318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.847614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.847645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.847992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.848018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.848277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.848303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.848559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.848590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.848888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.848914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.849179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.849204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.849455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.849485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.849753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.849781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 
00:33:54.102 [2024-07-20 18:09:28.850074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.850127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.850389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.850418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.850676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.850704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.850997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.851023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.851310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.851338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.851601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.851631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.852060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.852087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.852389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.852417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.852724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.852753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.853040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.853066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 
00:33:54.102 [2024-07-20 18:09:28.853330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.853359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.853826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.853882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.854158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.854199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.854488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.854527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.854830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.854856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.855172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.855198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.102 [2024-07-20 18:09:28.855451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.102 [2024-07-20 18:09:28.855477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.102 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.855922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.855963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.856243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.856271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.856531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.856562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 
00:33:54.103 [2024-07-20 18:09:28.856807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.856851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.857115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.857140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.857402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.857432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.857897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.857937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.858177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.858203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.858454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.858483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.858755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.858787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.859061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.859088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.859382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.859411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.859649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.859678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 
00:33:54.103 [2024-07-20 18:09:28.859968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.859994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.860264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.860293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.860562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.860591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.860856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.860881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.861159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.861189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.861433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.861462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.861720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.861745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.862036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.862062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.862378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.862406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.862666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.862693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 
00:33:54.103 [2024-07-20 18:09:28.862972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.863001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.863225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.863255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.863512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.863538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.863833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.863862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.864121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.864150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.864402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.864427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.864717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.864745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.865003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.865032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.865305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.865330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.865582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.865610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 
00:33:54.103 [2024-07-20 18:09:28.865903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.865932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.866163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.866189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.103 [2024-07-20 18:09:28.866490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.103 [2024-07-20 18:09:28.866515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.103 qpair failed and we were unable to recover it. 00:33:54.378 [2024-07-20 18:09:28.866773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-07-20 18:09:28.866812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 00:33:54.378 [2024-07-20 18:09:28.867074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-07-20 18:09:28.867099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 00:33:54.378 [2024-07-20 18:09:28.867490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-07-20 18:09:28.867539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 00:33:54.378 [2024-07-20 18:09:28.867806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-07-20 18:09:28.867835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 00:33:54.378 [2024-07-20 18:09:28.868077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-07-20 18:09:28.868103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 00:33:54.378 [2024-07-20 18:09:28.868407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-07-20 18:09:28.868441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 00:33:54.378 [2024-07-20 18:09:28.868705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-07-20 18:09:28.868734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 
00:33:54.378 [2024-07-20 18:09:28.868998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.378 [2024-07-20 18:09:28.869025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.378 qpair failed and we were unable to recover it. 00:33:54.378 [2024-07-20 18:09:28.869304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.869332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.869735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.869764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.870003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.870029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.870292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.870320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.870609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.870638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.870880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.870907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.871208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.871234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.871473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.871499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.871869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.871896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 
00:33:54.379 [2024-07-20 18:09:28.872112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.872138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.872347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.872389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.872587] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:54.379 [2024-07-20 18:09:28.872635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.872665] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.379 [2024-07-20 18:09:28.872665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.872926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.872954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.873249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.873278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.873526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.873553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.873818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.873847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.874084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.874112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.874372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.874398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it.
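The "Starting SPDK v24.05.1-pre ... DPDK 22.11.4 initialization" banner plus the DPDK EAL parameter list confirm the restarted target is initializing while the initiator is still cycling through refused connects. The -c 0xF0 coremask in the EAL arguments is the same mask passed as -m above; decoding such a hex mask into CPU numbers needs nothing SPDK-specific, as in this small bash sketch:

#!/usr/bin/env bash
# Expand a DPDK/SPDK hex core mask into the CPU numbers it selects.
mask=0xF0
cores=()
for ((cpu = 0; cpu < 64; cpu++)); do
    if (( (mask >> cpu) & 1 )); then
        cores+=("$cpu")
    fi
done
echo "mask $mask -> cores: ${cores[*]}"   # prints: mask 0xF0 -> cores: 4 5 6 7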
00:33:54.379 [2024-07-20 18:09:28.874647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.874673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.874913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.874943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.875228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.875254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.875531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.875560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.875847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.875874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.876127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.876154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.876458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.876486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.876963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.876993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.877225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.877251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.877458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.877501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 
00:33:54.379 [2024-07-20 18:09:28.877920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.877949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.878171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.878196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.878498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.878526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.878825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.878855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.879094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.879120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.879376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.879405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.879841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.879872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.880162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.880188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.880503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.880535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.880807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.880836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 
00:33:54.379 [2024-07-20 18:09:28.881066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.881107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.881373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.881402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.881861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.881890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.882171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.882196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.882425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.882450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.882747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.882777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.883027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.883055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.883295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.883323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.883553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.883582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 00:33:54.379 [2024-07-20 18:09:28.883861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.379 [2024-07-20 18:09:28.883887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.379 qpair failed and we were unable to recover it. 
00:33:54.380 [2024-07-20 18:09:28.884124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.884152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.884458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.884484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.884710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.884736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.884976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.885003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.885283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.885311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.885557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.885584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.885825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.885867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.886139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.886165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.886421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.886447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.886804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.886847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 
00:33:54.380 [2024-07-20 18:09:28.887108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.887137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.887513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.887566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.887846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.887873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.888139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.888168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.888554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.888607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.888886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.888912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.889187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.889215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.889494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.889519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.889803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.889847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.890067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.890113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 
00:33:54.380 [2024-07-20 18:09:28.890370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.890395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.890656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.890685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.890958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.890985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.891217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.891243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.891494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.891522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.891786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.891821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.892074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.892099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.892370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.892398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.892655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.892690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.892961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.892987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 
00:33:54.380 [2024-07-20 18:09:28.893227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.893252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.893478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.893503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.893753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.893781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.894045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.894071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.894353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.894382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.894652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.894678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.895016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.895042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.895286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.895311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.895554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.895580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.895799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.895825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 
00:33:54.380 [2024-07-20 18:09:28.896068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.896094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.896348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.896374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.897919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.897949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.898283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.898352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.898610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.898639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.898894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.898921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.899147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.899173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.899388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.899414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.899683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.380 [2024-07-20 18:09:28.899709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.380 qpair failed and we were unable to recover it. 00:33:54.380 [2024-07-20 18:09:28.899986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.900013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 
00:33:54.381 [2024-07-20 18:09:28.900285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.900310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.900527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.900555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.900873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.900900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.901134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.901160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.901371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.901397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.901661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.901688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.901908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.901935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.902147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.902189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.902742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.902809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.903066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.903092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 
00:33:54.381 [2024-07-20 18:09:28.903358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.903386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.903648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.903677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.903983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.904010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.904284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.904312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.904594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.904622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.904890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.904916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.905131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.905172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.905461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.905489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.906860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.906894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.907165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.907195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 
00:33:54.381 [2024-07-20 18:09:28.907438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.907466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.907788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.907857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.908167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.908195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.908476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.908504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.381 [2024-07-20 18:09:28.909268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.909302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.909575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.909606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.909886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.909913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.910145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.910171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.910490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.910518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.910779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.910815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 
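Note on the "EAL: No free 2048 kB hugepages reported on node 1" line above: DPDK's EAL is warning that NUMA node 1 had no free 2 MB hugepages when this nvmf process initialized. A small hypothetical C check of the per-node counter behind that warning (the sysfs path follows the standard Linux layout and is an assumption, not taken from this log):

/* Sketch: read the free 2048 kB hugepage count for NUMA node 1,
 * i.e. the counter the EAL warning above refers to. */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
    FILE *f = fopen(path, "r");
    if (!f) {
        perror("fopen");   /* node1 may not exist on single-socket machines */
        return 1;
    }

    long free_pages = 0;
    if (fscanf(f, "%ld", &free_pages) == 1)
        printf("node1 free 2048kB hugepages: %ld\n", free_pages);
    fclose(f);
    return 0;
}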
00:33:54.381 [2024-07-20 18:09:28.911419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.911451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.911747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.911799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.912053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.912079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.912301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.912327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.912545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.912571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.912830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.912856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.913081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.913106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.913338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.913362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.913623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.913648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.913888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.913914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 
00:33:54.381 [2024-07-20 18:09:28.914122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.914149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.914375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.914401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.914616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.914641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.914878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.914904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.915116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.915141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.915353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.915379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.915601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.915626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.381 qpair failed and we were unable to recover it. 00:33:54.381 [2024-07-20 18:09:28.915877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.381 [2024-07-20 18:09:28.915903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.916150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.916175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.916415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.916442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 
00:33:54.382 [2024-07-20 18:09:28.916664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.916690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.916933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.916959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.917172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.917200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.917487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.917512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.917724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.917750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.917986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.918012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.918259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.918284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.918554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.918579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.918852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.918878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.919121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.919148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 
00:33:54.382 [2024-07-20 18:09:28.919347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.919373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.919604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.919629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.919852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.919879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.920117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.920142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.920375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.920402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.920679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.920706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.920922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.920948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.921163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.921188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.921404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.921429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.921670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.921696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 
00:33:54.382 [2024-07-20 18:09:28.921944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.921969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.922184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.922214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.922480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.922506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.922741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.922768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.922995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.923023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.923269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.923295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.923514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.923540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.923789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.923830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.924049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.924076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.924319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.924345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 
00:33:54.382 [2024-07-20 18:09:28.924617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.924642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.924875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.924902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.925175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.925201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.925454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.925481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.925726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.925752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.925975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.926001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.926254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.382 [2024-07-20 18:09:28.926279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.382 qpair failed and we were unable to recover it. 00:33:54.382 [2024-07-20 18:09:28.926566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.926591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.926834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.926860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.927077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.927104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 
00:33:54.383 [2024-07-20 18:09:28.927386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.927412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.927618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.927645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.927864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.927891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f584c000b90 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.928132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.928172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.928419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.928445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.928693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.928719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.928952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.928978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.929192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.929220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.929497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.929528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.929755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.929780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 
00:33:54.383 [2024-07-20 18:09:28.930031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.930056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.930298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.930323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.930560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.930585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.930827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.930852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.931067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.931096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.931374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.931399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.931609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.931636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.931881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.931907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.932113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.932140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.932421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.932446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 
00:33:54.383 [2024-07-20 18:09:28.932709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.932734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.932985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.933012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.933285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.933311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.933582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.933608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.933845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.933872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.934131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.934156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.934438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.934464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.934694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.934720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.934961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.934986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.935206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.935232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 
00:33:54.383 [2024-07-20 18:09:28.935437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.935464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.935689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.935714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.935961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.935987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.936209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.936235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.936500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.936526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.936799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.936829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.937047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.937073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.937322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.937348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.937622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.937648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.937921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.937946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 
00:33:54.383 [2024-07-20 18:09:28.938229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.938255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.938507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.938533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.938798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.938823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.939069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.939106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.939362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.939388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.939629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.939654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.939901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.939927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.940198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.940223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.383 [2024-07-20 18:09:28.940434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.383 [2024-07-20 18:09:28.940459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.383 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.940726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.940752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f7570 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 
00:33:54.384 [2024-07-20 18:09:28.941001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.941044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.941302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.941330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.941602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.941629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.941873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.941901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.942700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.942741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.942996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.943023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.943248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.943288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.943549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.943577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.943799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.943827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.944071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.944114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 
00:33:54.384 [2024-07-20 18:09:28.944221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:54.384 [2024-07-20 18:09:28.944391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.944417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.944643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.944669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.944888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.944916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.945136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.945162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.945381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.945406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.945623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.945649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.945860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.945887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.946127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.946168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.946454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.946480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 
00:33:54.384 [2024-07-20 18:09:28.947492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.947533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.947831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.947859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.948109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.948135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.948392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.948419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.949230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.949270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.949618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.949645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.950172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.950231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.950585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.950614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.950875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.950904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.951146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.951177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 
00:33:54.384 [2024-07-20 18:09:28.951433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.951459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.951805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.951847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.952650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.952690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.953577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.953617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.953910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.953940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.954159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.954186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.954498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.954524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.954865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.954892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.955912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.955942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.956197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.956224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 
00:33:54.384 [2024-07-20 18:09:28.956520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.956546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.956759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.956785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.957011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.957037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.957314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.957340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.957564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.957593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.957827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.957854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.958078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.958110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.958355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.958383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.958623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.958649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.384 [2024-07-20 18:09:28.958872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.958898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 
00:33:54.384 [2024-07-20 18:09:28.959133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.384 [2024-07-20 18:09:28.959158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.384 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.959389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.959415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.959664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.959689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.959998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.960024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.960263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.960288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.960504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.960530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.960799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.960827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.961114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.961157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.961434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.961460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.961700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.961726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 
00:33:54.385 [2024-07-20 18:09:28.961950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.961976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.962195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.962221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.962455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.962481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.962730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.962756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.962988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.963015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.963301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.963327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.963608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.963638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.963890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.963918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.964156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.964183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.965123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.965165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 
00:33:54.385 [2024-07-20 18:09:28.965467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.965496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.965758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.965784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.966076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.966102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.966388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.966414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.966665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.966692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.966944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.966970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.967193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.967220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.967436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.967462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.967705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.967732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.967991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.968019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 
00:33:54.385 [2024-07-20 18:09:28.968267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.968294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.968507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.968535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.968832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.968860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.969080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.969117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.969360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.969385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.969639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.969666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.969942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.969968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.970240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.970266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.970607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.970633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.970882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.970909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 
00:33:54.385 [2024-07-20 18:09:28.971117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.971144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.385 [2024-07-20 18:09:28.971388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.385 [2024-07-20 18:09:28.971414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.385 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.971691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.971717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.971950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.971977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.972216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.972242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.972512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.972537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.972769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.972800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.973023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.973048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.973292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.973317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.973524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.973550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5850000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 
00:33:54.386 [2024-07-20 18:09:28.973824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.973864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.974102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.974131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.974410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.974438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.974680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.974706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.974942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.974969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.975245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.975272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.975540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.975572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.975818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.975845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.976055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.976081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.976335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.976361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 
00:33:54.386 [2024-07-20 18:09:28.976604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.976631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.976848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.976876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.977094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.977121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.977402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.977428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.977692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.977718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.977955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.977982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.978191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.978227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.978465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.978491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.978750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.978775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.978993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.979019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 
00:33:54.386 [2024-07-20 18:09:28.979268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.979294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.979519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.979545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.979776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.979809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.980022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.980049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.980295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.980322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.980535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.980561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.980769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.980799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.981014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.981041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.981325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.981351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.981564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.981589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 
00:33:54.386 [2024-07-20 18:09:28.981838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.981864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.982085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.982112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.982356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.982382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.982660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.982687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.982940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.982968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.983171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.983198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.983419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.983448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.983695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.983724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.983951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.983979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 00:33:54.386 [2024-07-20 18:09:28.984192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.386 [2024-07-20 18:09:28.984218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.386 qpair failed and we were unable to recover it. 
00:33:54.387 [2024-07-20 18:09:28.984422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.984448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.984691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.984718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.984950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.984977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.985184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.985210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.985491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.985517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.985791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.985828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.986051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.986081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.986316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.986343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.986580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.986606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.986877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.986904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 
00:33:54.387 [2024-07-20 18:09:28.987130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.987157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.987392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.987419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.987684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.987710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.987933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.987961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.988229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.988255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.988521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.988547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.988814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.988841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.989062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.989088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.989361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.989387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.989603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.989629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 
00:33:54.387 [2024-07-20 18:09:28.989850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.989878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.990090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.990122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.990386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.990412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.990676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.990702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.990922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.990949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.991159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.991187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.991431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.991458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.991679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.991706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.991949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.991976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.992202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.992228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 
00:33:54.387 [2024-07-20 18:09:28.992470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.992495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.992730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.992757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.993002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.993029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.993314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.993341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.993586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.993613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.993842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.993869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.994085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.994110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.994345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.994371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.994575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.994601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.994814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.994840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 
00:33:54.387 [2024-07-20 18:09:28.995051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.995078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.995315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.995340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.995629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.995655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.995902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.995929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.996153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.996179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.996417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.996443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.996835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.996866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.997075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.997101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.997321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.997347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.997551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.997578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 
00:33:54.387 [2024-07-20 18:09:28.997825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.997851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.387 qpair failed and we were unable to recover it. 00:33:54.387 [2024-07-20 18:09:28.998069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.387 [2024-07-20 18:09:28.998105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:28.998316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:28.998343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:28.998606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:28.998633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:28.998898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:28.998924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:28.999136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:28.999164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:28.999407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:28.999433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:28.999733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:28.999760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:28.999986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.000013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.000420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.000446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 
00:33:54.388 [2024-07-20 18:09:29.000735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.000762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.000991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.001017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.001318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.001344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.001606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.001633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.001864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.001891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.002354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.002394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.002641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.002667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.002918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.002946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.003171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.003198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.003435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.003463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 
00:33:54.388 [2024-07-20 18:09:29.003676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.003702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.004014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.004041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.004374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.004401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.004677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.004704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.005254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.005308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.005615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.005644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.006055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.006098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.006337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.006365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.006589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.006617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.006886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.006913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 
00:33:54.388 [2024-07-20 18:09:29.007131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.007157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.007381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.007408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.007655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.007682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.007942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.007968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.008185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.008211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.008430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.008457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.008673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.008705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.008945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.008971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.009271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.009296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.009543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.009568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 
00:33:54.388 [2024-07-20 18:09:29.009838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.009865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.010081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.010108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.010335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.010362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.010612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.010646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.010899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.010927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.011133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.011160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.011407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.011433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.011737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.011764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.011986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.012016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 00:33:54.388 [2024-07-20 18:09:29.012225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.012253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.388 qpair failed and we were unable to recover it. 
00:33:54.388 [2024-07-20 18:09:29.012472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.388 [2024-07-20 18:09:29.012498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.012718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.012745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.012976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.013003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.013265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.013291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.013496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.013522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.013737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.013764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.013996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.014023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.014269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.014295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.014509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.014535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.014767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.014812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 
00:33:54.389 [2024-07-20 18:09:29.015026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.015054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.015303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.015329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.015621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.015648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.015932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.015959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.016159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.016185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.016472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.016499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.016875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.016902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.017116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.017143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.017389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.017415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.017663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.017689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 
00:33:54.389 [2024-07-20 18:09:29.017952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.017979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.018201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.018239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.019434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.019464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.020362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.020404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.021300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.021342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.021631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.021659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.021936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.021964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.022181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.022208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.022449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.022477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.022729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.022755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 
00:33:54.389 [2024-07-20 18:09:29.022980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.023006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.023254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.023280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.023520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.023546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.023835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.023861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.024138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.024173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.024382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.024408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.024622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.024648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.024863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.024890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.025111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.025137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.025350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.025378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 
00:33:54.389 [2024-07-20 18:09:29.025621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.025648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.389 qpair failed and we were unable to recover it. 00:33:54.389 [2024-07-20 18:09:29.025868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.389 [2024-07-20 18:09:29.025895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.026113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.026140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.026371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.026397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.026618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.026644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.026887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.026914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.027130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.027158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.027502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.027528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.027818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.027846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.028055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.028082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 
00:33:54.390 [2024-07-20 18:09:29.028308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.028334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.028603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.028629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.028878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.028906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.029122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.029153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.029361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.029387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.029606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.029632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.029900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.029927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.030155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.030183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.030422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.030448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.030679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.030705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 
00:33:54.390 [2024-07-20 18:09:29.030953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.030979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.031242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.031268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.031548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.031574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.031823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.031849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.032068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.032095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.032385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.032426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.032680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.032706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.033014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.033042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.033359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.033387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.033633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.033660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 
00:33:54.390 [2024-07-20 18:09:29.033906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.033933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.034164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.034190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.034424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.034450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.034667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.034694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.034936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.034963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.035180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.035206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.035435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.035461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.035686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.035728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.036022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.036050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.036268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.036294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 
00:33:54.390 [2024-07-20 18:09:29.036507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.036548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.036803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.036830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.037042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.037069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.037333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.037360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.037627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.037653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.037921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.037948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.038166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.038193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.038433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.038459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.038676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.038702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.038939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.038966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 
00:33:54.390 [2024-07-20 18:09:29.039179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.390 [2024-07-20 18:09:29.039206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.390 qpair failed and we were unable to recover it. 00:33:54.390 [2024-07-20 18:09:29.039468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.039495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.039711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.039737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.039972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.040003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.040212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.040239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.040501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.040527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.040763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.040790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.041181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.041208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.041425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.041453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.041661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.041688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 
00:33:54.391 [2024-07-20 18:09:29.041907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.391 [2024-07-20 18:09:29.041935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.391 qpair failed and we were unable to recover it.
00:33:54.391 [2024-07-20 18:09:29.042308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.391 [2024-07-20 18:09:29.042334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.391 qpair failed and we were unable to recover it.
00:33:54.391 [2024-07-20 18:09:29.042544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.391 [2024-07-20 18:09:29.042570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.391 qpair failed and we were unable to recover it.
00:33:54.391 [2024-07-20 18:09:29.042771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.391 [2024-07-20 18:09:29.042802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.391 qpair failed and we were unable to recover it.
00:33:54.391 [2024-07-20 18:09:29.042910] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:54.391 [2024-07-20 18:09:29.042946] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:54.391 [2024-07-20 18:09:29.042962] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:54.391 [2024-07-20 18:09:29.042974] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:54.391 [2024-07-20 18:09:29.042986] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:54.391 [2024-07-20 18:09:29.043022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.391 [2024-07-20 18:09:29.043048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.391 qpair failed and we were unable to recover it.
00:33:54.391 [2024-07-20 18:09:29.043204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:33:54.391 [2024-07-20 18:09:29.043267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.391 [2024-07-20 18:09:29.043295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.391 qpair failed and we were unable to recover it.
00:33:54.391 [2024-07-20 18:09:29.043263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:33:54.391 [2024-07-20 18:09:29.043336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:33:54.391 [2024-07-20 18:09:29.043340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:33:54.391 [2024-07-20 18:09:29.043568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.391 [2024-07-20 18:09:29.043593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.391 qpair failed and we were unable to recover it.
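The app_setup_trace NOTICE lines above give the trace-capture recipe for this run. A minimal sketch of that workflow, assuming the spdk_trace tool from this SPDK build is on PATH and that instance id 0 is the one reported by the app (both taken from the NOTICE text, not verified against this node):
  spdk_trace -s nvmf -i 0            # capture a snapshot of runtime events for the nvmf app, instance 0
  cp /dev/shm/nvmf_trace.0 /tmp/     # or copy the shared-memory trace file for offline analysis/debug
The repeated errno = 111 (ECONNREFUSED) pairs surrounding these notices indicate that nothing was yet accepting TCP connections on 10.0.0.2:4420 when the initiator retried.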
00:33:54.391 [2024-07-20 18:09:29.043817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.043844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.044053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.044079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.044328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.044354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.044592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.044618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.044830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.044856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.045068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.045105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.045347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.045374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.045592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.045618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.045863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.045890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.046110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.046137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 
00:33:54.391 [2024-07-20 18:09:29.046353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.046379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.046625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.046651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.046875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.046903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.047116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.047142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.047356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.047384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.047623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.047649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.047864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.047890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.048110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.048136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.048344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.048370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.048606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.048632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 
00:33:54.391 [2024-07-20 18:09:29.048859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.048886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.049108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.049135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.049372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.049398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.049619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.049645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.049928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.049955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.050171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.050198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.050408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.050434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.050644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.050670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.050909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.050936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.051148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.051174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 
00:33:54.391 [2024-07-20 18:09:29.051375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.051401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.051635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.051661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.051899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.391 [2024-07-20 18:09:29.051925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.391 qpair failed and we were unable to recover it. 00:33:54.391 [2024-07-20 18:09:29.052207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.052233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.052506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.052532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.052786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.052817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.053029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.053060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.053263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.053289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.053540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.053566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.053824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.053851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 
00:33:54.392 [2024-07-20 18:09:29.054073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.054110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.054332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.054359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.054599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.054625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.054850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.054878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.055099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.055125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.055337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.055365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.055590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.055616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.055856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.055883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.056116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.056142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.056448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.056475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 
00:33:54.392 [2024-07-20 18:09:29.056722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.056748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.056970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.056997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.057210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.057237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.057462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.057489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.057709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.057736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.057947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.057975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.058418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.058457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.058684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.058711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.058943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.058970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.059178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.059206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 
00:33:54.392 [2024-07-20 18:09:29.059440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.059466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.059698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.059724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.059951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.059978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.060236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.060262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.060501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.060527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.060726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.060752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.060978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.061007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.061252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.061279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.061513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.061540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.061755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.061789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 
00:33:54.392 [2024-07-20 18:09:29.062014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.062040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.062247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.062273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.062473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.062499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.062704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.062731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.062988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.063016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.063254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.063280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.063497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.063528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.063742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.063768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.392 qpair failed and we were unable to recover it. 00:33:54.392 [2024-07-20 18:09:29.063995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.392 [2024-07-20 18:09:29.064023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.064246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.064274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 
00:33:54.393 [2024-07-20 18:09:29.064514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.064541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.064782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.064816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.065039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.065065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.065301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.065327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.065561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.065587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.065831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.065858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.066065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.066091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.066303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.066329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.066596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.066622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.066843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.066870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 
00:33:54.393 [2024-07-20 18:09:29.067103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.067129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.067362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.067388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.067593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.067619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.067834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.067860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.068068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.068096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.068312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.068339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.068577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.068604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.068825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.068852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.069075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.069101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.069313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.069341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 
00:33:54.393 [2024-07-20 18:09:29.069547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.069573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.069825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.069851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.070072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.070097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.070338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.070364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.070585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.070611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.070857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.070884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.071102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.071128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.071336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.071361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.071603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.071629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 00:33:54.393 [2024-07-20 18:09:29.071845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.393 [2024-07-20 18:09:29.071871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.393 qpair failed and we were unable to recover it. 
00:33:54.397 [2024-07-20 18:09:29.124886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-07-20 18:09:29.124912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.397 qpair failed and we were unable to recover it. 00:33:54.397 [2024-07-20 18:09:29.125120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-07-20 18:09:29.125146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.397 qpair failed and we were unable to recover it. 00:33:54.397 [2024-07-20 18:09:29.125344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-07-20 18:09:29.125370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.397 qpair failed and we were unable to recover it. 00:33:54.397 [2024-07-20 18:09:29.125608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-07-20 18:09:29.125634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.397 qpair failed and we were unable to recover it. 00:33:54.397 [2024-07-20 18:09:29.125843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-07-20 18:09:29.125869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.397 qpair failed and we were unable to recover it. 00:33:54.397 [2024-07-20 18:09:29.126081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-07-20 18:09:29.126107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.397 qpair failed and we were unable to recover it. 00:33:54.397 [2024-07-20 18:09:29.126316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-07-20 18:09:29.126341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.397 qpair failed and we were unable to recover it. 00:33:54.397 [2024-07-20 18:09:29.126570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.397 [2024-07-20 18:09:29.126595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.126804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.126831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.127069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.127096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 
00:33:54.398 [2024-07-20 18:09:29.127363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.127389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.127822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.127865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.128109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.128135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.128354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.128380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.128639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.128664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.128935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.128961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.129167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.129193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.129432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.129459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.129683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.129709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.129963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.129990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 
00:33:54.398 [2024-07-20 18:09:29.130239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.130265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.130468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.130494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.130738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.130764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.130990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.131015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.131253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.131279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.131548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.131574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.131784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.131816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.132048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.132074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.132285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.132310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.132510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.132536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 
00:33:54.398 [2024-07-20 18:09:29.132804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.132830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.133047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.133081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.133312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.133338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.133583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.133614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.133818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.133844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.134078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.134103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.134349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.134375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.134594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.134620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.134836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.134864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.135077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.135104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 
00:33:54.398 [2024-07-20 18:09:29.135310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.135337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.135604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.135630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.135865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.135891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.136127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.136153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.136366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.136392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.136591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.136616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.136852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.136879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.137084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.137110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.137348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.137374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.137610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.137635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 
00:33:54.398 [2024-07-20 18:09:29.137872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.137898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.138130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.138155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.138386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.138412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.138618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.138645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.138884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.138910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.398 [2024-07-20 18:09:29.139124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.398 [2024-07-20 18:09:29.139149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.398 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.139344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.139370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.139602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.139628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.139868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.139894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.140133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.140159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 
00:33:54.399 [2024-07-20 18:09:29.140369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.140395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.140606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.140632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.140900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.140926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.141161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.141187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.141420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.141446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.141686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.141712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.141934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.141960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.142201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.142229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.142467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.142493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.142730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.142758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 
00:33:54.399 [2024-07-20 18:09:29.143008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.143034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.143268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.143294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.143514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.143540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.143809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.143840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.144049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.144075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.144301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.144326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.144566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.144592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.144836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.144862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.145072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.145098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.145353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.145379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 
00:33:54.399 [2024-07-20 18:09:29.145621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.145647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.145887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.145913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.146174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.146200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.146438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.146463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.146675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.146703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.146943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.146969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.147179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.147205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.147418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.147445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.147646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.147672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.147909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.147937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 
00:33:54.399 [2024-07-20 18:09:29.148182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.148208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.148447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.148472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.148683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.148709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.148927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.148955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.149231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.149257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.149492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.149518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.149718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.149743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.149987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.150013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.150254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.150280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.150548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.150573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 
00:33:54.399 [2024-07-20 18:09:29.150820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.150846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.151050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.151076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.151279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.151305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.151512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.151538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.151742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.151768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.399 [2024-07-20 18:09:29.152003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.399 [2024-07-20 18:09:29.152029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.399 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.152270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.152296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.152502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.152527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.152738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.152764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.153110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.153139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 
00:33:54.400 [2024-07-20 18:09:29.153343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.153369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.153610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.153637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.153882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.153908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.154156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.154188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.154403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.154431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.154668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.154695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.154900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.154927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.155326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.155357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.155606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.155633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.155850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.155877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 
00:33:54.400 [2024-07-20 18:09:29.156098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.156126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.156342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.156368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.156573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.156600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.156820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.156847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.157065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.157250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.400 [2024-07-20 18:09:29.157480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.400 [2024-07-20 18:09:29.157507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.400 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.157718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.157745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.157979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.158007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.158217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.158244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.158509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.158599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 
00:33:54.666 [2024-07-20 18:09:29.158899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.158928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.159215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.159243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.159459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.159485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.159813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.159839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.160074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.160102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.160322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.160407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.160643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.160730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.161038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.161066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.161304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.161331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.161588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.161616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 
00:33:54.666 [2024-07-20 18:09:29.161862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.161889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.162128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.162154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.162365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.162391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.162630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.162655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.162901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.162928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.163131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.163157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.163360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.163386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.163597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.163622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.163828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.163854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.164083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.164109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 
00:33:54.666 [2024-07-20 18:09:29.164320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.164346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.164608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.164634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.164864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.164889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.165124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.165154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.165397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.165424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.165693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.165719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.165931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.165958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.166162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.166188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.166431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.166457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.166680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.166707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 
00:33:54.666 [2024-07-20 18:09:29.166915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.166941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.167150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.167177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.167420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.167446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.167654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.167679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.167903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.167930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.168175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.168201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.168437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.168463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.168672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.168698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.168908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.168935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 00:33:54.666 [2024-07-20 18:09:29.169180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.666 [2024-07-20 18:09:29.169206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.666 qpair failed and we were unable to recover it. 
00:33:54.666 [2024-07-20 18:09:29.169413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.667 [2024-07-20 18:09:29.169439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.667 qpair failed and we were unable to recover it.
00:33:54.667 [2024-07-20 18:09:29.170058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.667 [2024-07-20 18:09:29.170085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.667 qpair failed and we were unable to recover it.
00:33:54.667 [2024-07-20 18:09:29.170294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.667 [2024-07-20 18:09:29.170319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.667 qpair failed and we were unable to recover it.
00:33:54.667 [2024-07-20 18:09:29.170561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.667 [2024-07-20 18:09:29.170588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.667 qpair failed and we were unable to recover it.
00:33:54.667 [2024-07-20 18:09:29.170824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.667 [2024-07-20 18:09:29.170850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.667 qpair failed and we were unable to recover it.
00:33:54.667 [2024-07-20 18:09:29.171054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.667 [2024-07-20 18:09:29.171080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.667 qpair failed and we were unable to recover it.
00:33:54.667 [2024-07-20 18:09:29.171334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.667 [2024-07-20 18:09:29.171361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.667 qpair failed and we were unable to recover it.
00:33:54.667 [2024-07-20 18:09:29.171626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.667 [2024-07-20 18:09:29.171653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.667 qpair failed and we were unable to recover it.
00:33:54.667 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:33:54.667 [2024-07-20 18:09:29.171863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.667 [2024-07-20 18:09:29.171890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.667 qpair failed and we were unable to recover it.
00:33:54.667 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:33:54.667 [2024-07-20 18:09:29.172131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.667 [2024-07-20 18:09:29.172166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.667 qpair failed and we were unable to recover it.
00:33:54.667 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:33:54.667 [2024-07-20 18:09:29.172380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.667 [2024-07-20 18:09:29.172407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.667 qpair failed and we were unable to recover it.
00:33:54.667 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:54.667 [2024-07-20 18:09:29.172622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.667 [2024-07-20 18:09:29.172650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.667 qpair failed and we were unable to recover it.
00:33:54.667 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:54.667 [2024-07-20 18:09:29.172914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.667 [2024-07-20 18:09:29.172942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.667 qpair failed and we were unable to recover it.
00:33:54.667 [2024-07-20 18:09:29.173178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.667 [2024-07-20 18:09:29.173204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.667 qpair failed and we were unable to recover it.
00:33:54.667 [2024-07-20 18:09:29.173447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.667 [2024-07-20 18:09:29.173473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.667 qpair failed and we were unable to recover it.
00:33:54.667 [2024-07-20 18:09:29.173681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.667 [2024-07-20 18:09:29.173709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.667 qpair failed and we were unable to recover it.
00:33:54.667 [2024-07-20 18:09:29.173925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.667 [2024-07-20 18:09:29.173952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.667 qpair failed and we were unable to recover it.
00:33:54.667 [2024-07-20 18:09:29.174201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.174227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.174449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.174475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.174727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.174753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.174998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.175025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.175267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.175300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.175515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.175542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.175760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.175787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.176013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.176040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.176257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.176289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.176499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.176525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 
00:33:54.667 [2024-07-20 18:09:29.176768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.176801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.177030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.177056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.177259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.177285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.177489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.177514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.177752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.177779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.178034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.178060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.178309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.178335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.178572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.178598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.178809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.178839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.179074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.179106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 
00:33:54.667 [2024-07-20 18:09:29.179322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.179348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.179549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.179576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.179789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.179819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.180064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.180090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.180327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.180353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.180567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.180593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.180842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.180869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.181078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.181103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.181314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.181342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.181555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.181582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 
00:33:54.667 [2024-07-20 18:09:29.181829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.181855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.182101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.182127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.182369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.667 [2024-07-20 18:09:29.182396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.667 qpair failed and we were unable to recover it. 00:33:54.667 [2024-07-20 18:09:29.182639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.182666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.182904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.182932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.183179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.183204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.183410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.183436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.183640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.183668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.183879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.183906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.184130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.184157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 
00:33:54.668 [2024-07-20 18:09:29.184372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.184399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.184628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.184655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.184894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.184921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.185161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.185187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.185420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.185451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.185696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.185723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.185936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.185963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.186205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.186231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.186443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.186470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.186684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.186712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 
00:33:54.668 [2024-07-20 18:09:29.186951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.186979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.187245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.187271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.187510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.187536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.187776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.187815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.188056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.188082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.188296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.188322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.188530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.188562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.188779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.188811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.189035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.189062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.189295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.189321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 
00:33:54.668 [2024-07-20 18:09:29.189567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.189594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.189848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.189876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.190079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.190105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.190345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.190370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.190608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.190635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.190910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.190937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.191161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.191187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.191419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.191445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.191691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.191716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.191935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.191962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 
00:33:54.668 [2024-07-20 18:09:29.192180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.192206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.192412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.192438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.192673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.192699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.192920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.192946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.193155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.193181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.193416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.193443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.193642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.193667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.193876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.193904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.194117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.194144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 00:33:54.668 [2024-07-20 18:09:29.194352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.668 [2024-07-20 18:09:29.194377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.668 qpair failed and we were unable to recover it. 
00:33:54.668 [2024-07-20 18:09:29.194589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.668 [2024-07-20 18:09:29.194615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.668 qpair failed and we were unable to recover it.
00:33:54.668 [2024-07-20 18:09:29.194889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.668 [2024-07-20 18:09:29.194916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.668 qpair failed and we were unable to recover it.
00:33:54.668 [2024-07-20 18:09:29.195190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.668 [2024-07-20 18:09:29.195216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.668 qpair failed and we were unable to recover it.
00:33:54.668 [2024-07-20 18:09:29.195460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.668 [2024-07-20 18:09:29.195486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.668 qpair failed and we were unable to recover it.
00:33:54.668 [2024-07-20 18:09:29.195725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.668 [2024-07-20 18:09:29.195756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.668 qpair failed and we were unable to recover it.
00:33:54.668 [2024-07-20 18:09:29.195989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.668 [2024-07-20 18:09:29.196016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.668 qpair failed and we were unable to recover it.
00:33:54.668 [2024-07-20 18:09:29.196221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.668 [2024-07-20 18:09:29.196248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:54.669 qpair failed and we were unable to recover it.
00:33:54.669 [2024-07-20 18:09:29.196472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.669 [2024-07-20 18:09:29.196497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.669 qpair failed and we were unable to recover it.
00:33:54.669 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:54.669 [2024-07-20 18:09:29.196708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.669 [2024-07-20 18:09:29.196736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.669 qpair failed and we were unable to recover it.
00:33:54.669 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:54.669 [2024-07-20 18:09:29.196999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.669 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:54.669 [2024-07-20 18:09:29.197085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.669 qpair failed and we were unable to recover it.
00:33:54.669 [2024-07-20 18:09:29.197366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.669 [2024-07-20 18:09:29.197392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.669 qpair failed and we were unable to recover it.
00:33:54.669 [2024-07-20 18:09:29.197674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.669 [2024-07-20 18:09:29.197701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.669 qpair failed and we were unable to recover it.
00:33:54.669 [2024-07-20 18:09:29.197935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.669 [2024-07-20 18:09:29.197964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.669 qpair failed and we were unable to recover it.
00:33:54.669 [2024-07-20 18:09:29.198214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.669 [2024-07-20 18:09:29.198240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.669 qpair failed and we were unable to recover it.
00:33:54.669 [2024-07-20 18:09:29.198496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.669 [2024-07-20 18:09:29.198522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.669 qpair failed and we were unable to recover it.
00:33:54.669 [2024-07-20 18:09:29.198744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.669 [2024-07-20 18:09:29.198772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.669 qpair failed and we were unable to recover it.
00:33:54.669 [2024-07-20 18:09:29.199037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.669 [2024-07-20 18:09:29.199065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.669 qpair failed and we were unable to recover it.
00:33:54.669 [2024-07-20 18:09:29.199280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:54.669 [2024-07-20 18:09:29.199306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420
00:33:54.669 qpair failed and we were unable to recover it.
00:33:54.669 [2024-07-20 18:09:29.199516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.199541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.199781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.199823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.200032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.200057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.200292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.200319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.200555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.200580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.200815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.200841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.201078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.201104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.201305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.201331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.201532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.201558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.201765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.201797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 
00:33:54.669 [2024-07-20 18:09:29.202041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.202066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.202284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.202315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.202523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.202549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.202754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.202779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.202992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.203017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.203269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.203295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.203764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.203810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.204057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.204083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.204327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.204353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.204595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.204621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 
00:33:54.669 [2024-07-20 18:09:29.204866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.204893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.205111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.205137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.205343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.205370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.205610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.205635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.206031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.206059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.206312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.206338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.206582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.206607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.206828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.206855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.207070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.207097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.207556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.207596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 
00:33:54.669 [2024-07-20 18:09:29.207816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.207842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.208329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.208370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.208622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.208650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.208888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.669 [2024-07-20 18:09:29.208916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.669 qpair failed and we were unable to recover it. 00:33:54.669 [2024-07-20 18:09:29.209163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.209192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.209446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.209472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.209683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.209708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.209955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.209984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.210229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.210255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.210649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.210675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 
00:33:54.670 [2024-07-20 18:09:29.210924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.210951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.211197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.211225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.211466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.211492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.211701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.211727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.212236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.212291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.212549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.212577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.212784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.212820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.213043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.213069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.213314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.213341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.213587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.213613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 
00:33:54.670 [2024-07-20 18:09:29.213842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.213869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.214087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.214124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.214337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.214365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.214628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.214654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.214877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.214906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.215156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.215183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.215395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.215421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.215663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.215689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.215931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.215957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.216199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.216225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 
00:33:54.670 [2024-07-20 18:09:29.216437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.216463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.216667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.216694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.216946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.216972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.217214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.217241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.217478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.217504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.217753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.217780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.218012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.218038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.218247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.218274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.218493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.218519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.218754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.218780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 
00:33:54.670 [2024-07-20 18:09:29.219016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.219043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.219277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.219303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.219523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.219548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.219754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.219780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.220004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.220030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.220274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.220300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.220519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.220545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.220761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.220789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.221046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.221073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.221312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.221338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 
00:33:54.670 [2024-07-20 18:09:29.221579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.221606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.221825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.221852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.222098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.222123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.222384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.222410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.222653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.222679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.222924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.222950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.223196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.223222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.223507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.223533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.223740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.223766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 [2024-07-20 18:09:29.223986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.224017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 
00:33:54.670 [2024-07-20 18:09:29.224263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.224289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.670 qpair failed and we were unable to recover it. 00:33:54.670 Malloc0 00:33:54.670 [2024-07-20 18:09:29.224509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.670 [2024-07-20 18:09:29.224539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.224743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.224769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.671 [2024-07-20 18:09:29.224988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:54.671 [2024-07-20 18:09:29.225014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.671 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.671 [2024-07-20 18:09:29.225293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.225318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.225571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.225597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.225854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.225882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.226117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.226144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 
00:33:54.671 [2024-07-20 18:09:29.226376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.226402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.226657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.226684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.226897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.226923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.227142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.227168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.227436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.227462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.227709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.227736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.227961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.227987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.228048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.671 [2024-07-20 18:09:29.228247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.228274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.228516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.228542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 
00:33:54.671 [2024-07-20 18:09:29.228772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.228816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.229043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.229070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.229284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.229309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.229519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.229545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.229782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.229817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.230037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.230063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.230309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.230335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.230547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.230574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.230803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.230830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.231052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.231079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 
00:33:54.671 [2024-07-20 18:09:29.231349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.231375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.231612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.231638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.231931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.231957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.232202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.232228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.232485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.232511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.232746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.232772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.233001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.233027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.233241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.233267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.233485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.233510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.233761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.233787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 
00:33:54.671 [2024-07-20 18:09:29.234044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.234070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.234330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.234356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.234571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.234601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.234836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.234862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.235065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.235091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.235321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.235347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.235557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.235584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.235816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.235853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.236106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.236132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 
00:33:54.671 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.671 [2024-07-20 18:09:29.236373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.236399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:54.671 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.671 [2024-07-20 18:09:29.236611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.671 [2024-07-20 18:09:29.236637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.236843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.236870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.237080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.237106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.237373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.237400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.671 [2024-07-20 18:09:29.237656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.671 [2024-07-20 18:09:29.237683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.671 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.237906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.237932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.238168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.238194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it.
00:33:54.672 [2024-07-20 18:09:29.238468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.238493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.238730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.238756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.238984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.239011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.239267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.239293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.239534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.239562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.239764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.239790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.240045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.240070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.240293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.240319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.240526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.240552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.240805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.240832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 
00:33:54.672 [2024-07-20 18:09:29.241037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.241068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.241288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.241315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.241557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.241583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.241797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.241823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.242029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.242056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.242389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.242415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.242647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.242675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.242923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.242949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.243159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.243185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.243390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.243416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 
00:33:54.672 [2024-07-20 18:09:29.243737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.243763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.243982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.244008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.244217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.244242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.672 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:54.672 [2024-07-20 18:09:29.244457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.244487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.672 [2024-07-20 18:09:29.244757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.672 [2024-07-20 18:09:29.244783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.245038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.245064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.245275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.245301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.245506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.245533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 
00:33:54.672 [2024-07-20 18:09:29.245752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.245778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.246012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.246039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.246273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.246299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.246520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.246546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.246867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.246894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.247112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.247139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.247357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.247383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.247620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.247651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.247893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.247920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.248155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.248181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 
00:33:54.672 [2024-07-20 18:09:29.248437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.248463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.248699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.248725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.249002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.249028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.249240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.249266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.249474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.249500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.249734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.249760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.250012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.672 [2024-07-20 18:09:29.250038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.672 qpair failed and we were unable to recover it. 00:33:54.672 [2024-07-20 18:09:29.250277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.250304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.250550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.250576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.250814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.250841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 
00:33:54.673 [2024-07-20 18:09:29.251055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.251081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.251330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.251356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.251560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.251586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.251848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.251874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.252079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.252104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.673 [2024-07-20 18:09:29.252341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.252367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.673 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.673 [2024-07-20 18:09:29.252572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.252598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.673 [2024-07-20 18:09:29.252814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.252841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 
00:33:54.673 [2024-07-20 18:09:29.253071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.253097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.253325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.253351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.253590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.253616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.253932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.253958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.254165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.254196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.254464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.254490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.254693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.254719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.254986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.255012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.255256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.255282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.255542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.255568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 
00:33:54.673 [2024-07-20 18:09:29.255810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.255836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.256100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.673 [2024-07-20 18:09:29.256126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5844000b90 with addr=10.0.0.2, port=4420 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.256323] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.673 [2024-07-20 18:09:29.258982] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.673 [2024-07-20 18:09:29.259220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.673 [2024-07-20 18:09:29.259248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.673 [2024-07-20 18:09:29.259264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.673 [2024-07-20 18:09:29.259277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.673 [2024-07-20 18:09:29.259313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.673 qpair failed and we were unable to recover it. 
00:33:54.673 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.673 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:54.673 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.673 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.673 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.673 18:09:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1103661 00:33:54.673 [2024-07-20 18:09:29.268783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.673 [2024-07-20 18:09:29.269003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.673 [2024-07-20 18:09:29.269030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.673 [2024-07-20 18:09:29.269045] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.673 [2024-07-20 18:09:29.269058] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.673 [2024-07-20 18:09:29.269089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.278819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.673 [2024-07-20 18:09:29.279045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.673 [2024-07-20 18:09:29.279071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.673 [2024-07-20 18:09:29.279085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.673 [2024-07-20 18:09:29.279099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.673 [2024-07-20 18:09:29.279129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.673 qpair failed and we were unable to recover it. 
00:33:54.673 [2024-07-20 18:09:29.288845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.673 [2024-07-20 18:09:29.289069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.673 [2024-07-20 18:09:29.289095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.673 [2024-07-20 18:09:29.289110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.673 [2024-07-20 18:09:29.289122] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.673 [2024-07-20 18:09:29.289166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.298816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.673 [2024-07-20 18:09:29.299031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.673 [2024-07-20 18:09:29.299057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.673 [2024-07-20 18:09:29.299071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.673 [2024-07-20 18:09:29.299084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.673 [2024-07-20 18:09:29.299117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.308842] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.673 [2024-07-20 18:09:29.309096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.673 [2024-07-20 18:09:29.309127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.673 [2024-07-20 18:09:29.309142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.673 [2024-07-20 18:09:29.309155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.673 [2024-07-20 18:09:29.309185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.673 qpair failed and we were unable to recover it. 
00:33:54.673 [2024-07-20 18:09:29.318825] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.673 [2024-07-20 18:09:29.319035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.673 [2024-07-20 18:09:29.319061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.673 [2024-07-20 18:09:29.319075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.673 [2024-07-20 18:09:29.319088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.673 [2024-07-20 18:09:29.319119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.328873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.673 [2024-07-20 18:09:29.329094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.673 [2024-07-20 18:09:29.329123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.673 [2024-07-20 18:09:29.329139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.673 [2024-07-20 18:09:29.329152] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.673 [2024-07-20 18:09:29.329184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.338850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.673 [2024-07-20 18:09:29.339108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.673 [2024-07-20 18:09:29.339135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.673 [2024-07-20 18:09:29.339149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.673 [2024-07-20 18:09:29.339163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.673 [2024-07-20 18:09:29.339193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.673 qpair failed and we were unable to recover it. 
00:33:54.673 [2024-07-20 18:09:29.348871] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.673 [2024-07-20 18:09:29.349091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.673 [2024-07-20 18:09:29.349118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.673 [2024-07-20 18:09:29.349132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.673 [2024-07-20 18:09:29.349145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.673 [2024-07-20 18:09:29.349181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.358921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.673 [2024-07-20 18:09:29.359126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.673 [2024-07-20 18:09:29.359153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.673 [2024-07-20 18:09:29.359168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.673 [2024-07-20 18:09:29.359181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.673 [2024-07-20 18:09:29.359210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.673 qpair failed and we were unable to recover it. 00:33:54.673 [2024-07-20 18:09:29.368973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.673 [2024-07-20 18:09:29.369227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.673 [2024-07-20 18:09:29.369252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.673 [2024-07-20 18:09:29.369267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.674 [2024-07-20 18:09:29.369280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.674 [2024-07-20 18:09:29.369310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.674 qpair failed and we were unable to recover it. 
00:33:54.674 [2024-07-20 18:09:29.379000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.674 [2024-07-20 18:09:29.379219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.674 [2024-07-20 18:09:29.379245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.674 [2024-07-20 18:09:29.379259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.674 [2024-07-20 18:09:29.379272] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.674 [2024-07-20 18:09:29.379302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.674 qpair failed and we were unable to recover it. 00:33:54.674 [2024-07-20 18:09:29.389086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.674 [2024-07-20 18:09:29.389321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.674 [2024-07-20 18:09:29.389350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.674 [2024-07-20 18:09:29.389369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.674 [2024-07-20 18:09:29.389383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.674 [2024-07-20 18:09:29.389415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.674 qpair failed and we were unable to recover it. 00:33:54.674 [2024-07-20 18:09:29.399068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.674 [2024-07-20 18:09:29.399284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.674 [2024-07-20 18:09:29.399317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.674 [2024-07-20 18:09:29.399332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.674 [2024-07-20 18:09:29.399345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.674 [2024-07-20 18:09:29.399376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.674 qpair failed and we were unable to recover it. 
00:33:54.674 [2024-07-20 18:09:29.409056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.674 [2024-07-20 18:09:29.409273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.674 [2024-07-20 18:09:29.409299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.674 [2024-07-20 18:09:29.409313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.674 [2024-07-20 18:09:29.409326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.674 [2024-07-20 18:09:29.409356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.674 qpair failed and we were unable to recover it. 00:33:54.674 [2024-07-20 18:09:29.419120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.674 [2024-07-20 18:09:29.419340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.674 [2024-07-20 18:09:29.419366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.674 [2024-07-20 18:09:29.419381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.674 [2024-07-20 18:09:29.419394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.674 [2024-07-20 18:09:29.419424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.674 qpair failed and we were unable to recover it. 00:33:54.674 [2024-07-20 18:09:29.429133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.674 [2024-07-20 18:09:29.429342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.674 [2024-07-20 18:09:29.429369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.674 [2024-07-20 18:09:29.429383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.674 [2024-07-20 18:09:29.429396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.674 [2024-07-20 18:09:29.429426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.674 qpair failed and we were unable to recover it. 
00:33:54.674 [2024-07-20 18:09:29.439191] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.674 [2024-07-20 18:09:29.439408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.674 [2024-07-20 18:09:29.439434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.674 [2024-07-20 18:09:29.439448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.674 [2024-07-20 18:09:29.439466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.674 [2024-07-20 18:09:29.439497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.674 qpair failed and we were unable to recover it. 00:33:54.674 [2024-07-20 18:09:29.449185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.674 [2024-07-20 18:09:29.449406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.674 [2024-07-20 18:09:29.449432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.674 [2024-07-20 18:09:29.449446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.674 [2024-07-20 18:09:29.449459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.674 [2024-07-20 18:09:29.449491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.674 qpair failed and we were unable to recover it. 00:33:54.932 [2024-07-20 18:09:29.459210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.932 [2024-07-20 18:09:29.459418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.932 [2024-07-20 18:09:29.459445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.932 [2024-07-20 18:09:29.459459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.932 [2024-07-20 18:09:29.459472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.932 [2024-07-20 18:09:29.459503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.932 qpair failed and we were unable to recover it. 
00:33:54.932 [2024-07-20 18:09:29.469251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.932 [2024-07-20 18:09:29.469491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.932 [2024-07-20 18:09:29.469517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.932 [2024-07-20 18:09:29.469531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.932 [2024-07-20 18:09:29.469544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.932 [2024-07-20 18:09:29.469574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.932 qpair failed and we were unable to recover it. 00:33:54.932 [2024-07-20 18:09:29.479275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.932 [2024-07-20 18:09:29.479487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.932 [2024-07-20 18:09:29.479513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.932 [2024-07-20 18:09:29.479527] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.932 [2024-07-20 18:09:29.479541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.932 [2024-07-20 18:09:29.479572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.932 qpair failed and we were unable to recover it. 00:33:54.932 [2024-07-20 18:09:29.489290] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.932 [2024-07-20 18:09:29.489588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.932 [2024-07-20 18:09:29.489614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.932 [2024-07-20 18:09:29.489628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.932 [2024-07-20 18:09:29.489641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.932 [2024-07-20 18:09:29.489670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.932 qpair failed and we were unable to recover it. 
00:33:54.932 [2024-07-20 18:09:29.499385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.932 [2024-07-20 18:09:29.499625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.932 [2024-07-20 18:09:29.499652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.932 [2024-07-20 18:09:29.499666] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.932 [2024-07-20 18:09:29.499679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.932 [2024-07-20 18:09:29.499709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.932 qpair failed and we were unable to recover it. 00:33:54.932 [2024-07-20 18:09:29.509437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.932 [2024-07-20 18:09:29.509651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.932 [2024-07-20 18:09:29.509678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.932 [2024-07-20 18:09:29.509701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.932 [2024-07-20 18:09:29.509716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.932 [2024-07-20 18:09:29.509748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.932 qpair failed and we were unable to recover it. 00:33:54.932 [2024-07-20 18:09:29.519375] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.932 [2024-07-20 18:09:29.519589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.932 [2024-07-20 18:09:29.519616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.932 [2024-07-20 18:09:29.519630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.932 [2024-07-20 18:09:29.519643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.932 [2024-07-20 18:09:29.519674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.932 qpair failed and we were unable to recover it. 
00:33:54.932 [2024-07-20 18:09:29.529441] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.933 [2024-07-20 18:09:29.529652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.933 [2024-07-20 18:09:29.529678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.933 [2024-07-20 18:09:29.529693] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.933 [2024-07-20 18:09:29.529711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.933 [2024-07-20 18:09:29.529743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.933 qpair failed and we were unable to recover it. 00:33:54.933 [2024-07-20 18:09:29.539426] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.933 [2024-07-20 18:09:29.539637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.933 [2024-07-20 18:09:29.539663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.933 [2024-07-20 18:09:29.539677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.933 [2024-07-20 18:09:29.539691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.933 [2024-07-20 18:09:29.539720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.933 qpair failed and we were unable to recover it. 00:33:54.933 [2024-07-20 18:09:29.549504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.933 [2024-07-20 18:09:29.549712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.933 [2024-07-20 18:09:29.549738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.933 [2024-07-20 18:09:29.549752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.933 [2024-07-20 18:09:29.549765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.933 [2024-07-20 18:09:29.549803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.933 qpair failed and we were unable to recover it. 
00:33:54.933 [2024-07-20 18:09:29.559487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.933 [2024-07-20 18:09:29.559690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.933 [2024-07-20 18:09:29.559716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.933 [2024-07-20 18:09:29.559731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.933 [2024-07-20 18:09:29.559744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.933 [2024-07-20 18:09:29.559774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.933 qpair failed and we were unable to recover it. 00:33:54.933 [2024-07-20 18:09:29.569494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.933 [2024-07-20 18:09:29.569713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.933 [2024-07-20 18:09:29.569738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.933 [2024-07-20 18:09:29.569753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.933 [2024-07-20 18:09:29.569766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.933 [2024-07-20 18:09:29.569803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.933 qpair failed and we were unable to recover it. 00:33:54.933 [2024-07-20 18:09:29.579534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.933 [2024-07-20 18:09:29.579748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.933 [2024-07-20 18:09:29.579773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.933 [2024-07-20 18:09:29.579788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.933 [2024-07-20 18:09:29.579808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.933 [2024-07-20 18:09:29.579840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.933 qpair failed and we were unable to recover it. 
00:33:54.933 [2024-07-20 18:09:29.589588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.933 [2024-07-20 18:09:29.589811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.933 [2024-07-20 18:09:29.589837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.933 [2024-07-20 18:09:29.589852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.933 [2024-07-20 18:09:29.589865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.933 [2024-07-20 18:09:29.589895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.933 qpair failed and we were unable to recover it. 00:33:54.933 [2024-07-20 18:09:29.599572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.933 [2024-07-20 18:09:29.599785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.933 [2024-07-20 18:09:29.599818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.933 [2024-07-20 18:09:29.599833] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.933 [2024-07-20 18:09:29.599846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.933 [2024-07-20 18:09:29.599876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.933 qpair failed and we were unable to recover it. 00:33:54.933 [2024-07-20 18:09:29.609647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.933 [2024-07-20 18:09:29.609881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.933 [2024-07-20 18:09:29.609907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.933 [2024-07-20 18:09:29.609922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.933 [2024-07-20 18:09:29.609936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.933 [2024-07-20 18:09:29.609968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.933 qpair failed and we were unable to recover it. 
00:33:54.933 [2024-07-20 18:09:29.619641] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.933 [2024-07-20 18:09:29.619858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.933 [2024-07-20 18:09:29.619884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.933 [2024-07-20 18:09:29.619905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.933 [2024-07-20 18:09:29.619919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.933 [2024-07-20 18:09:29.619950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.933 qpair failed and we were unable to recover it. 00:33:54.933 [2024-07-20 18:09:29.629652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.933 [2024-07-20 18:09:29.629867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.933 [2024-07-20 18:09:29.629893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.933 [2024-07-20 18:09:29.629908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.933 [2024-07-20 18:09:29.629921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.933 [2024-07-20 18:09:29.629952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.933 qpair failed and we were unable to recover it. 00:33:54.933 [2024-07-20 18:09:29.639783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.933 [2024-07-20 18:09:29.640019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.933 [2024-07-20 18:09:29.640045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.933 [2024-07-20 18:09:29.640059] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.933 [2024-07-20 18:09:29.640072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.933 [2024-07-20 18:09:29.640102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.933 qpair failed and we were unable to recover it. 
00:33:54.933 [2024-07-20 18:09:29.649757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.933 [2024-07-20 18:09:29.650023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.933 [2024-07-20 18:09:29.650052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.933 [2024-07-20 18:09:29.650067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.933 [2024-07-20 18:09:29.650081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.933 [2024-07-20 18:09:29.650114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.933 qpair failed and we were unable to recover it. 00:33:54.933 [2024-07-20 18:09:29.659782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.933 [2024-07-20 18:09:29.660000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.933 [2024-07-20 18:09:29.660027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.933 [2024-07-20 18:09:29.660042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.933 [2024-07-20 18:09:29.660055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.933 [2024-07-20 18:09:29.660085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.933 qpair failed and we were unable to recover it. 00:33:54.933 [2024-07-20 18:09:29.669801] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.933 [2024-07-20 18:09:29.670022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.934 [2024-07-20 18:09:29.670057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.934 [2024-07-20 18:09:29.670072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.934 [2024-07-20 18:09:29.670085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.934 [2024-07-20 18:09:29.670118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.934 qpair failed and we were unable to recover it. 
00:33:54.934 [2024-07-20 18:09:29.679811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.934 [2024-07-20 18:09:29.680043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.934 [2024-07-20 18:09:29.680069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.934 [2024-07-20 18:09:29.680084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.934 [2024-07-20 18:09:29.680098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.934 [2024-07-20 18:09:29.680130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.934 qpair failed and we were unable to recover it. 00:33:54.934 [2024-07-20 18:09:29.689872] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.934 [2024-07-20 18:09:29.690083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.934 [2024-07-20 18:09:29.690109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.934 [2024-07-20 18:09:29.690123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.934 [2024-07-20 18:09:29.690137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.934 [2024-07-20 18:09:29.690167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.934 qpair failed and we were unable to recover it. 00:33:54.934 [2024-07-20 18:09:29.699896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.934 [2024-07-20 18:09:29.700107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.934 [2024-07-20 18:09:29.700133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.934 [2024-07-20 18:09:29.700148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.934 [2024-07-20 18:09:29.700161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.934 [2024-07-20 18:09:29.700193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.934 qpair failed and we were unable to recover it. 
00:33:54.934 [2024-07-20 18:09:29.709926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.934 [2024-07-20 18:09:29.710133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.934 [2024-07-20 18:09:29.710165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.934 [2024-07-20 18:09:29.710181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.934 [2024-07-20 18:09:29.710194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.934 [2024-07-20 18:09:29.710226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.934 qpair failed and we were unable to recover it. 00:33:54.934 [2024-07-20 18:09:29.719923] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.934 [2024-07-20 18:09:29.720126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.934 [2024-07-20 18:09:29.720151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.934 [2024-07-20 18:09:29.720166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.934 [2024-07-20 18:09:29.720179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:54.934 [2024-07-20 18:09:29.720211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:54.934 qpair failed and we were unable to recover it. 00:33:55.191 [2024-07-20 18:09:29.729979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.191 [2024-07-20 18:09:29.730196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.191 [2024-07-20 18:09:29.730222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.191 [2024-07-20 18:09:29.730237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.191 [2024-07-20 18:09:29.730250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.191 [2024-07-20 18:09:29.730286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.191 qpair failed and we were unable to recover it. 
00:33:55.191 [2024-07-20 18:09:29.739990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.191 [2024-07-20 18:09:29.740198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.191 [2024-07-20 18:09:29.740225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.191 [2024-07-20 18:09:29.740239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.191 [2024-07-20 18:09:29.740251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.191 [2024-07-20 18:09:29.740282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.191 qpair failed and we were unable to recover it. 00:33:55.191 [2024-07-20 18:09:29.750043] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.191 [2024-07-20 18:09:29.750252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.191 [2024-07-20 18:09:29.750278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.191 [2024-07-20 18:09:29.750292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.191 [2024-07-20 18:09:29.750305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.191 [2024-07-20 18:09:29.750342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.191 qpair failed and we were unable to recover it. 00:33:55.191 [2024-07-20 18:09:29.760040] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.191 [2024-07-20 18:09:29.760249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.191 [2024-07-20 18:09:29.760275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.191 [2024-07-20 18:09:29.760289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.191 [2024-07-20 18:09:29.760303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.191 [2024-07-20 18:09:29.760335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.191 qpair failed and we were unable to recover it. 
00:33:55.191 [2024-07-20 18:09:29.770093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.191 [2024-07-20 18:09:29.770311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.191 [2024-07-20 18:09:29.770338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.191 [2024-07-20 18:09:29.770353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.191 [2024-07-20 18:09:29.770366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.191 [2024-07-20 18:09:29.770397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.191 qpair failed and we were unable to recover it. 00:33:55.191 [2024-07-20 18:09:29.780184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.780457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.780484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.780499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.780516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.780550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 00:33:55.192 [2024-07-20 18:09:29.790136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.790349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.790375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.790389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.790403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.790433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 
00:33:55.192 [2024-07-20 18:09:29.800200] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.800497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.800529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.800544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.800557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.800587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 00:33:55.192 [2024-07-20 18:09:29.810231] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.810485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.810510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.810525] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.810538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.810567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 00:33:55.192 [2024-07-20 18:09:29.820229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.820447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.820473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.820488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.820501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.820530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 
00:33:55.192 [2024-07-20 18:09:29.830237] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.830444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.830469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.830484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.830497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.830526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 00:33:55.192 [2024-07-20 18:09:29.840278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.840538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.840564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.840578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.840598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.840629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 00:33:55.192 [2024-07-20 18:09:29.850302] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.850532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.850558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.850573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.850585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.850616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 
00:33:55.192 [2024-07-20 18:09:29.860319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.860530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.860556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.860571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.860584] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.860614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 00:33:55.192 [2024-07-20 18:09:29.870395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.870604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.870630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.870644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.870658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.870688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 00:33:55.192 [2024-07-20 18:09:29.880402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.880606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.880630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.880644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.880656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.880686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 
00:33:55.192 [2024-07-20 18:09:29.890485] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.890718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.890744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.890759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.890772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.890811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 00:33:55.192 [2024-07-20 18:09:29.900464] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.900684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.900710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.900724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.900740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.900772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 00:33:55.192 [2024-07-20 18:09:29.910522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.910754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.910780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.910801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.910816] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.910849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 
00:33:55.192 [2024-07-20 18:09:29.920517] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.920729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.920756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.920770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.920783] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.920822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 00:33:55.192 [2024-07-20 18:09:29.930561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.930772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.930807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.930824] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.930844] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.930875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 00:33:55.192 [2024-07-20 18:09:29.940643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.940867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.940895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.940915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.940928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.940961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 
00:33:55.192 [2024-07-20 18:09:29.950606] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.950821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.950847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.950861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.950874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.950904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 00:33:55.192 [2024-07-20 18:09:29.960636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.960851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.960877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.960892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.960905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.960935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 00:33:55.192 [2024-07-20 18:09:29.970649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.970868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.970895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.970909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.970922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.970955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 
00:33:55.192 [2024-07-20 18:09:29.980670] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.192 [2024-07-20 18:09:29.980888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.192 [2024-07-20 18:09:29.980915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.192 [2024-07-20 18:09:29.980930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.192 [2024-07-20 18:09:29.980943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.192 [2024-07-20 18:09:29.980973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.192 qpair failed and we were unable to recover it. 00:33:55.452 [2024-07-20 18:09:29.990692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.452 [2024-07-20 18:09:29.990964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.452 [2024-07-20 18:09:29.990991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.452 [2024-07-20 18:09:29.991005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.453 [2024-07-20 18:09:29.991018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.453 [2024-07-20 18:09:29.991051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.453 qpair failed and we were unable to recover it. 00:33:55.453 [2024-07-20 18:09:30.000726] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.453 [2024-07-20 18:09:30.000944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.453 [2024-07-20 18:09:30.000970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.453 [2024-07-20 18:09:30.000985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.453 [2024-07-20 18:09:30.000999] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.453 [2024-07-20 18:09:30.001030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.453 qpair failed and we were unable to recover it. 
00:33:55.453 [2024-07-20 18:09:30.010933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.453 [2024-07-20 18:09:30.011183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.453 [2024-07-20 18:09:30.011215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.453 [2024-07-20 18:09:30.011231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.453 [2024-07-20 18:09:30.011244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.453 [2024-07-20 18:09:30.011278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.453 qpair failed and we were unable to recover it. 00:33:55.453 [2024-07-20 18:09:30.020815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.453 [2024-07-20 18:09:30.021034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.453 [2024-07-20 18:09:30.021061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.453 [2024-07-20 18:09:30.021083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.453 [2024-07-20 18:09:30.021097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.453 [2024-07-20 18:09:30.021128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.453 qpair failed and we were unable to recover it. 00:33:55.453 [2024-07-20 18:09:30.030846] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.453 [2024-07-20 18:09:30.031054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.453 [2024-07-20 18:09:30.031081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.453 [2024-07-20 18:09:30.031096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.453 [2024-07-20 18:09:30.031110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.453 [2024-07-20 18:09:30.031140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.453 qpair failed and we were unable to recover it. 
00:33:55.453 [2024-07-20 18:09:30.040884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.453 [2024-07-20 18:09:30.041112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.453 [2024-07-20 18:09:30.041138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.453 [2024-07-20 18:09:30.041152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.453 [2024-07-20 18:09:30.041166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.453 [2024-07-20 18:09:30.041198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.453 qpair failed and we were unable to recover it. 00:33:55.453 [2024-07-20 18:09:30.050924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.453 [2024-07-20 18:09:30.051163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.453 [2024-07-20 18:09:30.051189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.453 [2024-07-20 18:09:30.051203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.453 [2024-07-20 18:09:30.051217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.453 [2024-07-20 18:09:30.051263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.453 qpair failed and we were unable to recover it. 00:33:55.453 [2024-07-20 18:09:30.061066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.453 [2024-07-20 18:09:30.061298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.453 [2024-07-20 18:09:30.061324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.453 [2024-07-20 18:09:30.061339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.453 [2024-07-20 18:09:30.061353] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.453 [2024-07-20 18:09:30.061384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.453 qpair failed and we were unable to recover it. 
00:33:55.453 [2024-07-20 18:09:30.070996] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.453 [2024-07-20 18:09:30.071210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.453 [2024-07-20 18:09:30.071236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.453 [2024-07-20 18:09:30.071251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.453 [2024-07-20 18:09:30.071264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.453 [2024-07-20 18:09:30.071309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.453 qpair failed and we were unable to recover it. 00:33:55.453 [2024-07-20 18:09:30.081011] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.453 [2024-07-20 18:09:30.081224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.453 [2024-07-20 18:09:30.081250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.453 [2024-07-20 18:09:30.081265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.453 [2024-07-20 18:09:30.081279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.453 [2024-07-20 18:09:30.081309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.453 qpair failed and we were unable to recover it. 00:33:55.453 [2024-07-20 18:09:30.091054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.453 [2024-07-20 18:09:30.091269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.453 [2024-07-20 18:09:30.091295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.453 [2024-07-20 18:09:30.091310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.453 [2024-07-20 18:09:30.091323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.453 [2024-07-20 18:09:30.091354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.453 qpair failed and we were unable to recover it. 
00:33:55.453 [2024-07-20 18:09:30.101023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.453 [2024-07-20 18:09:30.101239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.453 [2024-07-20 18:09:30.101266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.453 [2024-07-20 18:09:30.101281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.453 [2024-07-20 18:09:30.101294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.453 [2024-07-20 18:09:30.101337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.453 qpair failed and we were unable to recover it. 00:33:55.453 [2024-07-20 18:09:30.111066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.453 [2024-07-20 18:09:30.111374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.453 [2024-07-20 18:09:30.111408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.453 [2024-07-20 18:09:30.111441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.453 [2024-07-20 18:09:30.111454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.453 [2024-07-20 18:09:30.111484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.453 qpair failed and we were unable to recover it. 00:33:55.453 [2024-07-20 18:09:30.121070] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.453 [2024-07-20 18:09:30.121281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.453 [2024-07-20 18:09:30.121308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.453 [2024-07-20 18:09:30.121322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.453 [2024-07-20 18:09:30.121335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.453 [2024-07-20 18:09:30.121365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.453 qpair failed and we were unable to recover it. 
00:33:55.453 [2024-07-20 18:09:30.131128] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.453 [2024-07-20 18:09:30.131345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.453 [2024-07-20 18:09:30.131371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.453 [2024-07-20 18:09:30.131385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.453 [2024-07-20 18:09:30.131398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.453 [2024-07-20 18:09:30.131441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.454 qpair failed and we were unable to recover it. 00:33:55.454 [2024-07-20 18:09:30.141147] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.454 [2024-07-20 18:09:30.141367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.454 [2024-07-20 18:09:30.141393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.454 [2024-07-20 18:09:30.141407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.454 [2024-07-20 18:09:30.141420] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.454 [2024-07-20 18:09:30.141451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.454 qpair failed and we were unable to recover it. 00:33:55.454 [2024-07-20 18:09:30.151213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.454 [2024-07-20 18:09:30.151427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.454 [2024-07-20 18:09:30.151454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.454 [2024-07-20 18:09:30.151468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.454 [2024-07-20 18:09:30.151481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.454 [2024-07-20 18:09:30.151531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.454 qpair failed and we were unable to recover it. 
00:33:55.454 [2024-07-20 18:09:30.161178] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.454 [2024-07-20 18:09:30.161383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.454 [2024-07-20 18:09:30.161409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.454 [2024-07-20 18:09:30.161424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.454 [2024-07-20 18:09:30.161437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.454 [2024-07-20 18:09:30.161467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.454 qpair failed and we were unable to recover it. 00:33:55.454 [2024-07-20 18:09:30.171233] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.454 [2024-07-20 18:09:30.171448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.454 [2024-07-20 18:09:30.171473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.454 [2024-07-20 18:09:30.171487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.454 [2024-07-20 18:09:30.171500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.454 [2024-07-20 18:09:30.171530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.454 qpair failed and we were unable to recover it. 00:33:55.454 [2024-07-20 18:09:30.181224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.454 [2024-07-20 18:09:30.181429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.454 [2024-07-20 18:09:30.181455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.454 [2024-07-20 18:09:30.181470] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.454 [2024-07-20 18:09:30.181483] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.454 [2024-07-20 18:09:30.181512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.454 qpair failed and we were unable to recover it. 
00:33:55.454 [2024-07-20 18:09:30.191253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.454 [2024-07-20 18:09:30.191452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.454 [2024-07-20 18:09:30.191478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.454 [2024-07-20 18:09:30.191492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.454 [2024-07-20 18:09:30.191505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.454 [2024-07-20 18:09:30.191535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.454 qpair failed and we were unable to recover it. 00:33:55.454 [2024-07-20 18:09:30.201288] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.454 [2024-07-20 18:09:30.201499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.454 [2024-07-20 18:09:30.201530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.454 [2024-07-20 18:09:30.201545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.454 [2024-07-20 18:09:30.201559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.454 [2024-07-20 18:09:30.201589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.454 qpair failed and we were unable to recover it. 00:33:55.454 [2024-07-20 18:09:30.211333] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.454 [2024-07-20 18:09:30.211545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.454 [2024-07-20 18:09:30.211570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.454 [2024-07-20 18:09:30.211584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.454 [2024-07-20 18:09:30.211597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.454 [2024-07-20 18:09:30.211627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.454 qpair failed and we were unable to recover it. 
00:33:55.454 [2024-07-20 18:09:30.221419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.454 [2024-07-20 18:09:30.221628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.454 [2024-07-20 18:09:30.221654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.454 [2024-07-20 18:09:30.221668] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.454 [2024-07-20 18:09:30.221681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.454 [2024-07-20 18:09:30.221711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.454 qpair failed and we were unable to recover it. 00:33:55.454 [2024-07-20 18:09:30.231411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.454 [2024-07-20 18:09:30.231627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.454 [2024-07-20 18:09:30.231652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.454 [2024-07-20 18:09:30.231667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.454 [2024-07-20 18:09:30.231680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.454 [2024-07-20 18:09:30.231710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.454 qpair failed and we were unable to recover it. 00:33:55.454 [2024-07-20 18:09:30.241406] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.454 [2024-07-20 18:09:30.241620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.454 [2024-07-20 18:09:30.241708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.454 [2024-07-20 18:09:30.241728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.454 [2024-07-20 18:09:30.241741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.454 [2024-07-20 18:09:30.241778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.454 qpair failed and we were unable to recover it. 
00:33:55.712 [2024-07-20 18:09:30.251456] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.712 [2024-07-20 18:09:30.251690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.712 [2024-07-20 18:09:30.251773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.712 [2024-07-20 18:09:30.251789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.712 [2024-07-20 18:09:30.251814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.712 [2024-07-20 18:09:30.252008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.712 qpair failed and we were unable to recover it. 00:33:55.712 [2024-07-20 18:09:30.261468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.712 [2024-07-20 18:09:30.261735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.712 [2024-07-20 18:09:30.261761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.712 [2024-07-20 18:09:30.261776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.712 [2024-07-20 18:09:30.261788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.712 [2024-07-20 18:09:30.261828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.712 qpair failed and we were unable to recover it. 00:33:55.712 [2024-07-20 18:09:30.271528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.712 [2024-07-20 18:09:30.271739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.712 [2024-07-20 18:09:30.271765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.712 [2024-07-20 18:09:30.271778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.712 [2024-07-20 18:09:30.271799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.712 [2024-07-20 18:09:30.271850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.712 qpair failed and we were unable to recover it. 
00:33:55.712 [2024-07-20 18:09:30.281523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.712 [2024-07-20 18:09:30.281730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.712 [2024-07-20 18:09:30.281756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.712 [2024-07-20 18:09:30.281770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.712 [2024-07-20 18:09:30.281784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.712 [2024-07-20 18:09:30.281822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.712 qpair failed and we were unable to recover it. 00:33:55.712 [2024-07-20 18:09:30.291571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.712 [2024-07-20 18:09:30.291785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.712 [2024-07-20 18:09:30.291818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.712 [2024-07-20 18:09:30.291833] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.712 [2024-07-20 18:09:30.291846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.712 [2024-07-20 18:09:30.291876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.712 qpair failed and we were unable to recover it. 00:33:55.712 [2024-07-20 18:09:30.301579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.712 [2024-07-20 18:09:30.301789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.712 [2024-07-20 18:09:30.301823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.712 [2024-07-20 18:09:30.301838] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.712 [2024-07-20 18:09:30.301851] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.712 [2024-07-20 18:09:30.301881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.712 qpair failed and we were unable to recover it. 
00:33:55.712 [2024-07-20 18:09:30.311669] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.712 [2024-07-20 18:09:30.311889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.712 [2024-07-20 18:09:30.311916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.311930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.311943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.311975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 00:33:55.713 [2024-07-20 18:09:30.321708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.713 [2024-07-20 18:09:30.321960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.713 [2024-07-20 18:09:30.321988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.322003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.322016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.322111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 00:33:55.713 [2024-07-20 18:09:30.331813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.713 [2024-07-20 18:09:30.332097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.713 [2024-07-20 18:09:30.332123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.332138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.332157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.332191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 
00:33:55.713 [2024-07-20 18:09:30.341707] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.713 [2024-07-20 18:09:30.341916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.713 [2024-07-20 18:09:30.341943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.341957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.341970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.342000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 00:33:55.713 [2024-07-20 18:09:30.351765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.713 [2024-07-20 18:09:30.352010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.713 [2024-07-20 18:09:30.352036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.352050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.352064] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.352095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 00:33:55.713 [2024-07-20 18:09:30.361787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.713 [2024-07-20 18:09:30.362001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.713 [2024-07-20 18:09:30.362027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.362041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.362054] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.362084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 
00:33:55.713 [2024-07-20 18:09:30.371818] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.713 [2024-07-20 18:09:30.372032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.713 [2024-07-20 18:09:30.372058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.372072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.372085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.372118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 00:33:55.713 [2024-07-20 18:09:30.381848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.713 [2024-07-20 18:09:30.382068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.713 [2024-07-20 18:09:30.382094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.382109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.382123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.382153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 00:33:55.713 [2024-07-20 18:09:30.391854] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.713 [2024-07-20 18:09:30.392059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.713 [2024-07-20 18:09:30.392085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.392100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.392112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.392145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 
00:33:55.713 [2024-07-20 18:09:30.401895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.713 [2024-07-20 18:09:30.402109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.713 [2024-07-20 18:09:30.402135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.402149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.402162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.402193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 00:33:55.713 [2024-07-20 18:09:30.411966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.713 [2024-07-20 18:09:30.412201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.713 [2024-07-20 18:09:30.412227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.412241] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.412254] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.412284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 00:33:55.713 [2024-07-20 18:09:30.421983] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.713 [2024-07-20 18:09:30.422292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.713 [2024-07-20 18:09:30.422319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.422342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.422357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.422388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 
00:33:55.713 [2024-07-20 18:09:30.431988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.713 [2024-07-20 18:09:30.432244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.713 [2024-07-20 18:09:30.432271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.432285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.432299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.432329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 00:33:55.713 [2024-07-20 18:09:30.442011] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.713 [2024-07-20 18:09:30.442231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.713 [2024-07-20 18:09:30.442257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.442271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.442285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.442315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 00:33:55.713 [2024-07-20 18:09:30.452074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.713 [2024-07-20 18:09:30.452325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.713 [2024-07-20 18:09:30.452352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.452366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.452380] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.452412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 
00:33:55.713 [2024-07-20 18:09:30.462072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.713 [2024-07-20 18:09:30.462276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.713 [2024-07-20 18:09:30.462302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.462317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.462330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.462360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 00:33:55.713 [2024-07-20 18:09:30.472193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.713 [2024-07-20 18:09:30.472442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.713 [2024-07-20 18:09:30.472470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.472491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.472504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.472536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 00:33:55.713 [2024-07-20 18:09:30.482221] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.713 [2024-07-20 18:09:30.482455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.713 [2024-07-20 18:09:30.482481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.713 [2024-07-20 18:09:30.482496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.713 [2024-07-20 18:09:30.482509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.713 [2024-07-20 18:09:30.482539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.713 qpair failed and we were unable to recover it. 
00:33:55.713 [2024-07-20 18:09:30.492154] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.714 [2024-07-20 18:09:30.492368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.714 [2024-07-20 18:09:30.492393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.714 [2024-07-20 18:09:30.492408] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.714 [2024-07-20 18:09:30.492421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.714 [2024-07-20 18:09:30.492452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.714 qpair failed and we were unable to recover it. 00:33:55.714 [2024-07-20 18:09:30.502238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.714 [2024-07-20 18:09:30.502457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.714 [2024-07-20 18:09:30.502483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.714 [2024-07-20 18:09:30.502497] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.714 [2024-07-20 18:09:30.502511] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.714 [2024-07-20 18:09:30.502541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.714 qpair failed and we were unable to recover it. 00:33:55.971 [2024-07-20 18:09:30.512204] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.971 [2024-07-20 18:09:30.512418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.971 [2024-07-20 18:09:30.512445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.971 [2024-07-20 18:09:30.512466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.971 [2024-07-20 18:09:30.512480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.971 [2024-07-20 18:09:30.512511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.971 qpair failed and we were unable to recover it. 
00:33:55.971 [2024-07-20 18:09:30.522236] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.971 [2024-07-20 18:09:30.522450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.971 [2024-07-20 18:09:30.522476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.971 [2024-07-20 18:09:30.522491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.971 [2024-07-20 18:09:30.522504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.971 [2024-07-20 18:09:30.522534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.971 qpair failed and we were unable to recover it. 00:33:55.971 [2024-07-20 18:09:30.532276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.971 [2024-07-20 18:09:30.532502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.971 [2024-07-20 18:09:30.532528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.971 [2024-07-20 18:09:30.532542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.971 [2024-07-20 18:09:30.532555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.971 [2024-07-20 18:09:30.532585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.971 qpair failed and we were unable to recover it. 00:33:55.971 [2024-07-20 18:09:30.542294] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.971 [2024-07-20 18:09:30.542527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.971 [2024-07-20 18:09:30.542554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.971 [2024-07-20 18:09:30.542568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.971 [2024-07-20 18:09:30.542581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.971 [2024-07-20 18:09:30.542611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.971 qpair failed and we were unable to recover it. 
00:33:55.971 [2024-07-20 18:09:30.552389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.971 [2024-07-20 18:09:30.552608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.971 [2024-07-20 18:09:30.552634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.971 [2024-07-20 18:09:30.552648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.971 [2024-07-20 18:09:30.552661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.971 [2024-07-20 18:09:30.552691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.971 qpair failed and we were unable to recover it. 00:33:55.971 [2024-07-20 18:09:30.562420] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.971 [2024-07-20 18:09:30.562637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.971 [2024-07-20 18:09:30.562665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.971 [2024-07-20 18:09:30.562684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.971 [2024-07-20 18:09:30.562698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.971 [2024-07-20 18:09:30.562729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.971 qpair failed and we were unable to recover it. 00:33:55.971 [2024-07-20 18:09:30.572475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.971 [2024-07-20 18:09:30.572726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.971 [2024-07-20 18:09:30.572752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.971 [2024-07-20 18:09:30.572767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.971 [2024-07-20 18:09:30.572780] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.971 [2024-07-20 18:09:30.572819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.971 qpair failed and we were unable to recover it. 
00:33:55.971 [2024-07-20 18:09:30.582452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.971 [2024-07-20 18:09:30.582661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.971 [2024-07-20 18:09:30.582687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.971 [2024-07-20 18:09:30.582701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.971 [2024-07-20 18:09:30.582714] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.971 [2024-07-20 18:09:30.582746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.971 qpair failed and we were unable to recover it. 00:33:55.971 [2024-07-20 18:09:30.592466] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.971 [2024-07-20 18:09:30.592674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.971 [2024-07-20 18:09:30.592700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.971 [2024-07-20 18:09:30.592714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.971 [2024-07-20 18:09:30.592727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.971 [2024-07-20 18:09:30.592757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.971 qpair failed and we were unable to recover it. 00:33:55.971 [2024-07-20 18:09:30.602468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.971 [2024-07-20 18:09:30.602678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.971 [2024-07-20 18:09:30.602708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.971 [2024-07-20 18:09:30.602723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.971 [2024-07-20 18:09:30.602737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.971 [2024-07-20 18:09:30.602766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.971 qpair failed and we were unable to recover it. 
00:33:55.971 [2024-07-20 18:09:30.612567] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.971 [2024-07-20 18:09:30.612789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.971 [2024-07-20 18:09:30.612822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.971 [2024-07-20 18:09:30.612837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.971 [2024-07-20 18:09:30.612850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.971 [2024-07-20 18:09:30.612881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.971 qpair failed and we were unable to recover it. 00:33:55.971 [2024-07-20 18:09:30.622512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.971 [2024-07-20 18:09:30.622721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.971 [2024-07-20 18:09:30.622747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.972 [2024-07-20 18:09:30.622761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.972 [2024-07-20 18:09:30.622774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.972 [2024-07-20 18:09:30.622813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.972 qpair failed and we were unable to recover it. 00:33:55.972 [2024-07-20 18:09:30.632546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.972 [2024-07-20 18:09:30.632749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.972 [2024-07-20 18:09:30.632775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.972 [2024-07-20 18:09:30.632789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.972 [2024-07-20 18:09:30.632814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.972 [2024-07-20 18:09:30.632845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.972 qpair failed and we were unable to recover it. 
00:33:55.972 [2024-07-20 18:09:30.642639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.972 [2024-07-20 18:09:30.642855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.972 [2024-07-20 18:09:30.642881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.972 [2024-07-20 18:09:30.642895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.972 [2024-07-20 18:09:30.642909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.972 [2024-07-20 18:09:30.642945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.972 qpair failed and we were unable to recover it. 00:33:55.972 [2024-07-20 18:09:30.652616] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.972 [2024-07-20 18:09:30.652828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.972 [2024-07-20 18:09:30.652854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.972 [2024-07-20 18:09:30.652868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.972 [2024-07-20 18:09:30.652881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.972 [2024-07-20 18:09:30.652912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.972 qpair failed and we were unable to recover it. 00:33:55.972 [2024-07-20 18:09:30.662701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.972 [2024-07-20 18:09:30.662920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.972 [2024-07-20 18:09:30.662946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.972 [2024-07-20 18:09:30.662960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.972 [2024-07-20 18:09:30.662974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.972 [2024-07-20 18:09:30.663005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.972 qpair failed and we were unable to recover it. 
00:33:55.972 [2024-07-20 18:09:30.672707] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.972 [2024-07-20 18:09:30.672947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.972 [2024-07-20 18:09:30.672973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.972 [2024-07-20 18:09:30.672987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.972 [2024-07-20 18:09:30.673000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.972 [2024-07-20 18:09:30.673030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.972 qpair failed and we were unable to recover it. 00:33:55.972 [2024-07-20 18:09:30.682720] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.972 [2024-07-20 18:09:30.682934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.972 [2024-07-20 18:09:30.682960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.972 [2024-07-20 18:09:30.682974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.972 [2024-07-20 18:09:30.682987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.972 [2024-07-20 18:09:30.683018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.972 qpair failed and we were unable to recover it. 00:33:55.972 [2024-07-20 18:09:30.692730] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.972 [2024-07-20 18:09:30.692946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.972 [2024-07-20 18:09:30.692977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.972 [2024-07-20 18:09:30.692992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.972 [2024-07-20 18:09:30.693005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.972 [2024-07-20 18:09:30.693036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.972 qpair failed and we were unable to recover it. 
00:33:55.972 [2024-07-20 18:09:30.702748] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.972 [2024-07-20 18:09:30.702968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.972 [2024-07-20 18:09:30.702994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.972 [2024-07-20 18:09:30.703009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.972 [2024-07-20 18:09:30.703022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.972 [2024-07-20 18:09:30.703052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.972 qpair failed and we were unable to recover it. 00:33:55.972 [2024-07-20 18:09:30.712768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.972 [2024-07-20 18:09:30.712980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.972 [2024-07-20 18:09:30.713006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.972 [2024-07-20 18:09:30.713020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.972 [2024-07-20 18:09:30.713033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.972 [2024-07-20 18:09:30.713063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.972 qpair failed and we were unable to recover it. 00:33:55.972 [2024-07-20 18:09:30.722825] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.972 [2024-07-20 18:09:30.723037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.972 [2024-07-20 18:09:30.723063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.972 [2024-07-20 18:09:30.723077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.972 [2024-07-20 18:09:30.723090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.972 [2024-07-20 18:09:30.723120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.972 qpair failed and we were unable to recover it. 
00:33:55.972 [2024-07-20 18:09:30.732864] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.972 [2024-07-20 18:09:30.733160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.972 [2024-07-20 18:09:30.733185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.972 [2024-07-20 18:09:30.733200] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.972 [2024-07-20 18:09:30.733218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.972 [2024-07-20 18:09:30.733250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.972 qpair failed and we were unable to recover it. 00:33:55.972 [2024-07-20 18:09:30.742892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.972 [2024-07-20 18:09:30.743142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.972 [2024-07-20 18:09:30.743168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.972 [2024-07-20 18:09:30.743183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.972 [2024-07-20 18:09:30.743196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.972 [2024-07-20 18:09:30.743225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.972 qpair failed and we were unable to recover it. 00:33:55.972 [2024-07-20 18:09:30.752909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.972 [2024-07-20 18:09:30.753135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.972 [2024-07-20 18:09:30.753162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.972 [2024-07-20 18:09:30.753177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.972 [2024-07-20 18:09:30.753193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.972 [2024-07-20 18:09:30.753224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.972 qpair failed and we were unable to recover it. 
00:33:55.972 [2024-07-20 18:09:30.762970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:55.972 [2024-07-20 18:09:30.763183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:55.972 [2024-07-20 18:09:30.763212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:55.972 [2024-07-20 18:09:30.763227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:55.972 [2024-07-20 18:09:30.763240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:55.972 [2024-07-20 18:09:30.763271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:55.972 qpair failed and we were unable to recover it. 00:33:56.230 [2024-07-20 18:09:30.773028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.230 [2024-07-20 18:09:30.773287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.230 [2024-07-20 18:09:30.773313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.230 [2024-07-20 18:09:30.773327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.230 [2024-07-20 18:09:30.773340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.230 [2024-07-20 18:09:30.773371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.230 qpair failed and we were unable to recover it. 00:33:56.230 [2024-07-20 18:09:30.782977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.230 [2024-07-20 18:09:30.783248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.230 [2024-07-20 18:09:30.783274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.230 [2024-07-20 18:09:30.783289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.230 [2024-07-20 18:09:30.783302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.230 [2024-07-20 18:09:30.783332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.230 qpair failed and we were unable to recover it. 
00:33:56.230 [2024-07-20 18:09:30.793047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.230 [2024-07-20 18:09:30.793307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.230 [2024-07-20 18:09:30.793333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.230 [2024-07-20 18:09:30.793347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.230 [2024-07-20 18:09:30.793360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.231 [2024-07-20 18:09:30.793390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.231 qpair failed and we were unable to recover it. 00:33:56.231 [2024-07-20 18:09:30.803041] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.231 [2024-07-20 18:09:30.803252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.231 [2024-07-20 18:09:30.803278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.231 [2024-07-20 18:09:30.803292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.231 [2024-07-20 18:09:30.803305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.231 [2024-07-20 18:09:30.803337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.231 qpair failed and we were unable to recover it. 00:33:56.231 [2024-07-20 18:09:30.813053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.231 [2024-07-20 18:09:30.813266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.231 [2024-07-20 18:09:30.813290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.231 [2024-07-20 18:09:30.813305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.231 [2024-07-20 18:09:30.813318] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.231 [2024-07-20 18:09:30.813347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.231 qpair failed and we were unable to recover it. 
00:33:56.231 [2024-07-20 18:09:30.823098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.231 [2024-07-20 18:09:30.823345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.231 [2024-07-20 18:09:30.823370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.231 [2024-07-20 18:09:30.823391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.231 [2024-07-20 18:09:30.823405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.231 [2024-07-20 18:09:30.823435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.231 qpair failed and we were unable to recover it. 00:33:56.231 [2024-07-20 18:09:30.833140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.231 [2024-07-20 18:09:30.833347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.231 [2024-07-20 18:09:30.833373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.231 [2024-07-20 18:09:30.833387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.231 [2024-07-20 18:09:30.833400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.231 [2024-07-20 18:09:30.833431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.231 qpair failed and we were unable to recover it. 00:33:56.231 [2024-07-20 18:09:30.843169] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.231 [2024-07-20 18:09:30.843382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.231 [2024-07-20 18:09:30.843408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.231 [2024-07-20 18:09:30.843423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.231 [2024-07-20 18:09:30.843436] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.231 [2024-07-20 18:09:30.843466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.231 qpair failed and we were unable to recover it. 
00:33:56.231 [2024-07-20 18:09:30.853176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.231 [2024-07-20 18:09:30.853387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.231 [2024-07-20 18:09:30.853413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.231 [2024-07-20 18:09:30.853428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.231 [2024-07-20 18:09:30.853441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.231 [2024-07-20 18:09:30.853485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.231 qpair failed and we were unable to recover it. 00:33:56.231 [2024-07-20 18:09:30.863202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.231 [2024-07-20 18:09:30.863410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.231 [2024-07-20 18:09:30.863436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.231 [2024-07-20 18:09:30.863450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.231 [2024-07-20 18:09:30.863464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.231 [2024-07-20 18:09:30.863494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.231 qpair failed and we were unable to recover it. 00:33:56.231 [2024-07-20 18:09:30.873243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.231 [2024-07-20 18:09:30.873455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.231 [2024-07-20 18:09:30.873481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.231 [2024-07-20 18:09:30.873495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.231 [2024-07-20 18:09:30.873508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.231 [2024-07-20 18:09:30.873538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.231 qpair failed and we were unable to recover it. 
00:33:56.231 [2024-07-20 18:09:30.883247] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.231 [2024-07-20 18:09:30.883450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.231 [2024-07-20 18:09:30.883474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.231 [2024-07-20 18:09:30.883488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.231 [2024-07-20 18:09:30.883501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.231 [2024-07-20 18:09:30.883531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.231 qpair failed and we were unable to recover it. 00:33:56.231 [2024-07-20 18:09:30.893284] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.231 [2024-07-20 18:09:30.893495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.231 [2024-07-20 18:09:30.893521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.231 [2024-07-20 18:09:30.893536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.231 [2024-07-20 18:09:30.893550] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.231 [2024-07-20 18:09:30.893594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.231 qpair failed and we were unable to recover it. 00:33:56.231 [2024-07-20 18:09:30.903340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.231 [2024-07-20 18:09:30.903584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.231 [2024-07-20 18:09:30.903611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.231 [2024-07-20 18:09:30.903625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.231 [2024-07-20 18:09:30.903639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.231 [2024-07-20 18:09:30.903671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.231 qpair failed and we were unable to recover it. 
00:33:56.231 [2024-07-20 18:09:30.913346] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.231 [2024-07-20 18:09:30.913555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.231 [2024-07-20 18:09:30.913581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.231 [2024-07-20 18:09:30.913602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.231 [2024-07-20 18:09:30.913617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.231 [2024-07-20 18:09:30.913648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.231 qpair failed and we were unable to recover it. 00:33:56.231 [2024-07-20 18:09:30.923350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.231 [2024-07-20 18:09:30.923551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.231 [2024-07-20 18:09:30.923578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.231 [2024-07-20 18:09:30.923592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.231 [2024-07-20 18:09:30.923605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.231 [2024-07-20 18:09:30.923635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.231 qpair failed and we were unable to recover it. 00:33:56.231 [2024-07-20 18:09:30.933407] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.231 [2024-07-20 18:09:30.933618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.231 [2024-07-20 18:09:30.933643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.231 [2024-07-20 18:09:30.933658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.231 [2024-07-20 18:09:30.933671] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.231 [2024-07-20 18:09:30.933702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.231 qpair failed and we were unable to recover it. 
00:33:56.231 [2024-07-20 18:09:30.943504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.231 [2024-07-20 18:09:30.943762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.231 [2024-07-20 18:09:30.943790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.231 [2024-07-20 18:09:30.943821] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.231 [2024-07-20 18:09:30.943836] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.231 [2024-07-20 18:09:30.943882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.231 qpair failed and we were unable to recover it. 00:33:56.231 [2024-07-20 18:09:30.953442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.231 [2024-07-20 18:09:30.953643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.231 [2024-07-20 18:09:30.953669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.231 [2024-07-20 18:09:30.953684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.231 [2024-07-20 18:09:30.953697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.231 [2024-07-20 18:09:30.953729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.231 qpair failed and we were unable to recover it. 00:33:56.231 [2024-07-20 18:09:30.963456] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.231 [2024-07-20 18:09:30.963659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.231 [2024-07-20 18:09:30.963685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.231 [2024-07-20 18:09:30.963699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.231 [2024-07-20 18:09:30.963713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.232 [2024-07-20 18:09:30.963743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.232 qpair failed and we were unable to recover it. 
00:33:56.232 [2024-07-20 18:09:30.973500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.232 [2024-07-20 18:09:30.973760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.232 [2024-07-20 18:09:30.973786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.232 [2024-07-20 18:09:30.973809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.232 [2024-07-20 18:09:30.973825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.232 [2024-07-20 18:09:30.973856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.232 qpair failed and we were unable to recover it. 00:33:56.232 [2024-07-20 18:09:30.983522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.232 [2024-07-20 18:09:30.983732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.232 [2024-07-20 18:09:30.983758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.232 [2024-07-20 18:09:30.983773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.232 [2024-07-20 18:09:30.983786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.232 [2024-07-20 18:09:30.983838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.232 qpair failed and we were unable to recover it. 00:33:56.232 [2024-07-20 18:09:30.993539] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.232 [2024-07-20 18:09:30.993742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.232 [2024-07-20 18:09:30.993768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.232 [2024-07-20 18:09:30.993782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.232 [2024-07-20 18:09:30.993804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.232 [2024-07-20 18:09:30.993839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.232 qpair failed and we were unable to recover it. 
00:33:56.232 [2024-07-20 18:09:31.003584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.232 [2024-07-20 18:09:31.003803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.232 [2024-07-20 18:09:31.003834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.232 [2024-07-20 18:09:31.003849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.232 [2024-07-20 18:09:31.003863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.232 [2024-07-20 18:09:31.003895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.232 qpair failed and we were unable to recover it. 00:33:56.232 [2024-07-20 18:09:31.013636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.232 [2024-07-20 18:09:31.013858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.232 [2024-07-20 18:09:31.013884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.232 [2024-07-20 18:09:31.013899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.232 [2024-07-20 18:09:31.013912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.232 [2024-07-20 18:09:31.013942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.232 qpair failed and we were unable to recover it. 00:33:56.232 [2024-07-20 18:09:31.023725] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.232 [2024-07-20 18:09:31.023952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.232 [2024-07-20 18:09:31.023979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.232 [2024-07-20 18:09:31.023993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.232 [2024-07-20 18:09:31.024006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.232 [2024-07-20 18:09:31.024037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.232 qpair failed and we were unable to recover it. 
00:33:56.491 [2024-07-20 18:09:31.033660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.491 [2024-07-20 18:09:31.033878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.491 [2024-07-20 18:09:31.033905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.491 [2024-07-20 18:09:31.033919] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.491 [2024-07-20 18:09:31.033933] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.491 [2024-07-20 18:09:31.033963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.491 qpair failed and we were unable to recover it. 00:33:56.491 [2024-07-20 18:09:31.043683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.491 [2024-07-20 18:09:31.043896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.491 [2024-07-20 18:09:31.043923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.491 [2024-07-20 18:09:31.043938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.491 [2024-07-20 18:09:31.043951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.491 [2024-07-20 18:09:31.043987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.491 qpair failed and we were unable to recover it. 00:33:56.491 [2024-07-20 18:09:31.053744] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.491 [2024-07-20 18:09:31.053995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.491 [2024-07-20 18:09:31.054021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.491 [2024-07-20 18:09:31.054035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.491 [2024-07-20 18:09:31.054048] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.491 [2024-07-20 18:09:31.054079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.491 qpair failed and we were unable to recover it. 
00:33:56.491 [2024-07-20 18:09:31.063757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.491 [2024-07-20 18:09:31.063984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.491 [2024-07-20 18:09:31.064010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.491 [2024-07-20 18:09:31.064025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.491 [2024-07-20 18:09:31.064038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.491 [2024-07-20 18:09:31.064068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.491 qpair failed and we were unable to recover it. 00:33:56.491 [2024-07-20 18:09:31.073778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.491 [2024-07-20 18:09:31.074016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.491 [2024-07-20 18:09:31.074042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.491 [2024-07-20 18:09:31.074056] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.491 [2024-07-20 18:09:31.074070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.491 [2024-07-20 18:09:31.074100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.491 qpair failed and we were unable to recover it. 00:33:56.491 [2024-07-20 18:09:31.083807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.491 [2024-07-20 18:09:31.084027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.491 [2024-07-20 18:09:31.084053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.491 [2024-07-20 18:09:31.084067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.491 [2024-07-20 18:09:31.084081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.491 [2024-07-20 18:09:31.084113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.491 qpair failed and we were unable to recover it. 
00:33:56.491 [2024-07-20 18:09:31.093867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.491 [2024-07-20 18:09:31.094116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.491 [2024-07-20 18:09:31.094147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.491 [2024-07-20 18:09:31.094162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.491 [2024-07-20 18:09:31.094175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.491 [2024-07-20 18:09:31.094206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.491 qpair failed and we were unable to recover it. 00:33:56.491 [2024-07-20 18:09:31.103901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.491 [2024-07-20 18:09:31.104128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.491 [2024-07-20 18:09:31.104154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.491 [2024-07-20 18:09:31.104169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.491 [2024-07-20 18:09:31.104182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.491 [2024-07-20 18:09:31.104227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.491 qpair failed and we were unable to recover it. 00:33:56.491 [2024-07-20 18:09:31.113895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.491 [2024-07-20 18:09:31.114105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.491 [2024-07-20 18:09:31.114131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.491 [2024-07-20 18:09:31.114145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.491 [2024-07-20 18:09:31.114157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.491 [2024-07-20 18:09:31.114187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.491 qpair failed and we were unable to recover it. 
00:33:56.491 [2024-07-20 18:09:31.123907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.491 [2024-07-20 18:09:31.124110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.491 [2024-07-20 18:09:31.124137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.491 [2024-07-20 18:09:31.124151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.491 [2024-07-20 18:09:31.124164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.491 [2024-07-20 18:09:31.124194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.491 qpair failed and we were unable to recover it. 00:33:56.491 [2024-07-20 18:09:31.133970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.491 [2024-07-20 18:09:31.134183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.491 [2024-07-20 18:09:31.134209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.491 [2024-07-20 18:09:31.134223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.491 [2024-07-20 18:09:31.134242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.491 [2024-07-20 18:09:31.134274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.491 qpair failed and we were unable to recover it. 00:33:56.491 [2024-07-20 18:09:31.144010] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.491 [2024-07-20 18:09:31.144223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.491 [2024-07-20 18:09:31.144252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.491 [2024-07-20 18:09:31.144271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.491 [2024-07-20 18:09:31.144284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.491 [2024-07-20 18:09:31.144315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.491 qpair failed and we were unable to recover it. 
00:33:56.491 [2024-07-20 18:09:31.154064] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.491 [2024-07-20 18:09:31.154286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.491 [2024-07-20 18:09:31.154312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.491 [2024-07-20 18:09:31.154327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.491 [2024-07-20 18:09:31.154340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.491 [2024-07-20 18:09:31.154370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.491 qpair failed and we were unable to recover it. 00:33:56.491 [2024-07-20 18:09:31.164069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.491 [2024-07-20 18:09:31.164284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.491 [2024-07-20 18:09:31.164310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.491 [2024-07-20 18:09:31.164325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.491 [2024-07-20 18:09:31.164338] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.491 [2024-07-20 18:09:31.164368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.491 qpair failed and we were unable to recover it. 00:33:56.491 [2024-07-20 18:09:31.174133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.491 [2024-07-20 18:09:31.174375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.491 [2024-07-20 18:09:31.174401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.491 [2024-07-20 18:09:31.174415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.491 [2024-07-20 18:09:31.174428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.491 [2024-07-20 18:09:31.174458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.491 qpair failed and we were unable to recover it. 
00:33:56.491 [2024-07-20 18:09:31.184106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.491 [2024-07-20 18:09:31.184328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.491 [2024-07-20 18:09:31.184354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.491 [2024-07-20 18:09:31.184369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.491 [2024-07-20 18:09:31.184382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.492 [2024-07-20 18:09:31.184425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.492 qpair failed and we were unable to recover it. 00:33:56.492 [2024-07-20 18:09:31.194126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.492 [2024-07-20 18:09:31.194348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.492 [2024-07-20 18:09:31.194375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.492 [2024-07-20 18:09:31.194389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.492 [2024-07-20 18:09:31.194402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.492 [2024-07-20 18:09:31.194432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.492 qpair failed and we were unable to recover it. 00:33:56.492 [2024-07-20 18:09:31.204170] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.492 [2024-07-20 18:09:31.204406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.492 [2024-07-20 18:09:31.204432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.492 [2024-07-20 18:09:31.204447] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.492 [2024-07-20 18:09:31.204460] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.492 [2024-07-20 18:09:31.204491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.492 qpair failed and we were unable to recover it. 
00:33:56.492 [2024-07-20 18:09:31.214205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.492 [2024-07-20 18:09:31.214476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.492 [2024-07-20 18:09:31.214502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.492 [2024-07-20 18:09:31.214516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.492 [2024-07-20 18:09:31.214530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.492 [2024-07-20 18:09:31.214561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.492 qpair failed and we were unable to recover it. 00:33:56.492 [2024-07-20 18:09:31.224218] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.492 [2024-07-20 18:09:31.224472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.492 [2024-07-20 18:09:31.224498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.492 [2024-07-20 18:09:31.224512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.492 [2024-07-20 18:09:31.224531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.492 [2024-07-20 18:09:31.224564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.492 qpair failed and we were unable to recover it. 00:33:56.492 [2024-07-20 18:09:31.234238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.492 [2024-07-20 18:09:31.234452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.492 [2024-07-20 18:09:31.234478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.492 [2024-07-20 18:09:31.234493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.492 [2024-07-20 18:09:31.234506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.492 [2024-07-20 18:09:31.234535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.492 qpair failed and we were unable to recover it. 
00:33:56.492 [2024-07-20 18:09:31.244262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.492 [2024-07-20 18:09:31.244475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.492 [2024-07-20 18:09:31.244501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.492 [2024-07-20 18:09:31.244515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.492 [2024-07-20 18:09:31.244529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.492 [2024-07-20 18:09:31.244560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.492 qpair failed and we were unable to recover it. 00:33:56.492 [2024-07-20 18:09:31.254363] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.492 [2024-07-20 18:09:31.254641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.492 [2024-07-20 18:09:31.254668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.492 [2024-07-20 18:09:31.254682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.492 [2024-07-20 18:09:31.254695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.492 [2024-07-20 18:09:31.254724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.492 qpair failed and we were unable to recover it. 00:33:56.492 [2024-07-20 18:09:31.264341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.492 [2024-07-20 18:09:31.264547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.492 [2024-07-20 18:09:31.264573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.492 [2024-07-20 18:09:31.264587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.492 [2024-07-20 18:09:31.264601] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.492 [2024-07-20 18:09:31.264630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.492 qpair failed and we were unable to recover it. 
00:33:56.492 [2024-07-20 18:09:31.274402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.492 [2024-07-20 18:09:31.274613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.492 [2024-07-20 18:09:31.274640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.492 [2024-07-20 18:09:31.274655] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.492 [2024-07-20 18:09:31.274668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.492 [2024-07-20 18:09:31.274697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.492 qpair failed and we were unable to recover it. 00:33:56.492 [2024-07-20 18:09:31.284423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.492 [2024-07-20 18:09:31.284636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.492 [2024-07-20 18:09:31.284662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.492 [2024-07-20 18:09:31.284677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.492 [2024-07-20 18:09:31.284691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.492 [2024-07-20 18:09:31.284733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.492 qpair failed and we were unable to recover it. 00:33:56.750 [2024-07-20 18:09:31.294468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.750 [2024-07-20 18:09:31.294680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.750 [2024-07-20 18:09:31.294706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.750 [2024-07-20 18:09:31.294721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.750 [2024-07-20 18:09:31.294734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.750 [2024-07-20 18:09:31.294765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.750 qpair failed and we were unable to recover it. 
00:33:56.750 [2024-07-20 18:09:31.304545] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.750 [2024-07-20 18:09:31.304755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.750 [2024-07-20 18:09:31.304781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.750 [2024-07-20 18:09:31.304804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.750 [2024-07-20 18:09:31.304819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.750 [2024-07-20 18:09:31.304850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.750 qpair failed and we were unable to recover it. 00:33:56.750 [2024-07-20 18:09:31.314544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.750 [2024-07-20 18:09:31.314800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.750 [2024-07-20 18:09:31.314826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.750 [2024-07-20 18:09:31.314850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.750 [2024-07-20 18:09:31.314864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.750 [2024-07-20 18:09:31.314895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.750 qpair failed and we were unable to recover it. 00:33:56.750 [2024-07-20 18:09:31.324508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.750 [2024-07-20 18:09:31.324723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.750 [2024-07-20 18:09:31.324750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.750 [2024-07-20 18:09:31.324765] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.750 [2024-07-20 18:09:31.324778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.750 [2024-07-20 18:09:31.324820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.750 qpair failed and we were unable to recover it. 
00:33:56.750 [2024-07-20 18:09:31.334544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.750 [2024-07-20 18:09:31.334755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.750 [2024-07-20 18:09:31.334781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.750 [2024-07-20 18:09:31.334803] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.750 [2024-07-20 18:09:31.334818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.750 [2024-07-20 18:09:31.334849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.750 qpair failed and we were unable to recover it. 00:33:56.750 [2024-07-20 18:09:31.344568] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.750 [2024-07-20 18:09:31.344776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.750 [2024-07-20 18:09:31.344809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.750 [2024-07-20 18:09:31.344825] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.750 [2024-07-20 18:09:31.344838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.750 [2024-07-20 18:09:31.344868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.750 qpair failed and we were unable to recover it. 00:33:56.750 [2024-07-20 18:09:31.354607] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.750 [2024-07-20 18:09:31.354863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.750 [2024-07-20 18:09:31.354891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.750 [2024-07-20 18:09:31.354910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.750 [2024-07-20 18:09:31.354925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.750 [2024-07-20 18:09:31.354957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.750 qpair failed and we were unable to recover it. 
00:33:56.750 [2024-07-20 18:09:31.364635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.750 [2024-07-20 18:09:31.364887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.750 [2024-07-20 18:09:31.364914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.750 [2024-07-20 18:09:31.364929] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.750 [2024-07-20 18:09:31.364942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.750 [2024-07-20 18:09:31.364972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.750 qpair failed and we were unable to recover it. 00:33:56.750 [2024-07-20 18:09:31.374658] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.750 [2024-07-20 18:09:31.374884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.750 [2024-07-20 18:09:31.374921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.750 [2024-07-20 18:09:31.374936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.750 [2024-07-20 18:09:31.374949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.750 [2024-07-20 18:09:31.374980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.750 qpair failed and we were unable to recover it. 00:33:56.750 [2024-07-20 18:09:31.384717] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.750 [2024-07-20 18:09:31.384943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.750 [2024-07-20 18:09:31.384970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.750 [2024-07-20 18:09:31.384985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.750 [2024-07-20 18:09:31.385002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.750 [2024-07-20 18:09:31.385033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.750 qpair failed and we were unable to recover it. 
00:33:56.750 [2024-07-20 18:09:31.394749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.750 [2024-07-20 18:09:31.394967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.750 [2024-07-20 18:09:31.394992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.750 [2024-07-20 18:09:31.395006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.750 [2024-07-20 18:09:31.395019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.750 [2024-07-20 18:09:31.395050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.750 qpair failed and we were unable to recover it. 00:33:56.750 [2024-07-20 18:09:31.404753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.750 [2024-07-20 18:09:31.404962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.750 [2024-07-20 18:09:31.404995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.750 [2024-07-20 18:09:31.405010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.750 [2024-07-20 18:09:31.405023] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.750 [2024-07-20 18:09:31.405055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.750 qpair failed and we were unable to recover it. 00:33:56.750 [2024-07-20 18:09:31.414806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.750 [2024-07-20 18:09:31.415016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.750 [2024-07-20 18:09:31.415042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.750 [2024-07-20 18:09:31.415057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.750 [2024-07-20 18:09:31.415070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.750 [2024-07-20 18:09:31.415100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.750 qpair failed and we were unable to recover it. 
00:33:56.750 [2024-07-20 18:09:31.424785] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.750 [2024-07-20 18:09:31.425058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.750 [2024-07-20 18:09:31.425083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.750 [2024-07-20 18:09:31.425097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.750 [2024-07-20 18:09:31.425111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.750 [2024-07-20 18:09:31.425140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.750 qpair failed and we were unable to recover it. 00:33:56.750 [2024-07-20 18:09:31.434811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.750 [2024-07-20 18:09:31.435036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.750 [2024-07-20 18:09:31.435062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.750 [2024-07-20 18:09:31.435076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.750 [2024-07-20 18:09:31.435089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.750 [2024-07-20 18:09:31.435120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.750 qpair failed and we were unable to recover it. 00:33:56.750 [2024-07-20 18:09:31.444843] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.750 [2024-07-20 18:09:31.445062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.750 [2024-07-20 18:09:31.445088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.750 [2024-07-20 18:09:31.445102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.750 [2024-07-20 18:09:31.445116] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.751 [2024-07-20 18:09:31.445152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.751 qpair failed and we were unable to recover it. 
00:33:56.751 [2024-07-20 18:09:31.454878] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.751 [2024-07-20 18:09:31.455090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.751 [2024-07-20 18:09:31.455115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.751 [2024-07-20 18:09:31.455130] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.751 [2024-07-20 18:09:31.455142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.751 [2024-07-20 18:09:31.455172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.751 qpair failed and we were unable to recover it. 00:33:56.751 [2024-07-20 18:09:31.464926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.751 [2024-07-20 18:09:31.465138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.751 [2024-07-20 18:09:31.465164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.751 [2024-07-20 18:09:31.465178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.751 [2024-07-20 18:09:31.465191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.751 [2024-07-20 18:09:31.465221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.751 qpair failed and we were unable to recover it. 00:33:56.751 [2024-07-20 18:09:31.474932] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.751 [2024-07-20 18:09:31.475145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.751 [2024-07-20 18:09:31.475173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.751 [2024-07-20 18:09:31.475191] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.751 [2024-07-20 18:09:31.475206] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.751 [2024-07-20 18:09:31.475237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.751 qpair failed and we were unable to recover it. 
00:33:56.751 [2024-07-20 18:09:31.485025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.751 [2024-07-20 18:09:31.485242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.751 [2024-07-20 18:09:31.485268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.751 [2024-07-20 18:09:31.485282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.751 [2024-07-20 18:09:31.485295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.751 [2024-07-20 18:09:31.485327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.751 qpair failed and we were unable to recover it. 00:33:56.751 [2024-07-20 18:09:31.494993] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.751 [2024-07-20 18:09:31.495256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.751 [2024-07-20 18:09:31.495287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.751 [2024-07-20 18:09:31.495301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.751 [2024-07-20 18:09:31.495315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.751 [2024-07-20 18:09:31.495346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.751 qpair failed and we were unable to recover it. 00:33:56.751 [2024-07-20 18:09:31.505015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.751 [2024-07-20 18:09:31.505221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.751 [2024-07-20 18:09:31.505247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.751 [2024-07-20 18:09:31.505261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.751 [2024-07-20 18:09:31.505274] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.751 [2024-07-20 18:09:31.505303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.751 qpair failed and we were unable to recover it. 
00:33:56.751 [2024-07-20 18:09:31.515043] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.751 [2024-07-20 18:09:31.515253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.751 [2024-07-20 18:09:31.515279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.751 [2024-07-20 18:09:31.515293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.751 [2024-07-20 18:09:31.515306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.751 [2024-07-20 18:09:31.515336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.751 qpair failed and we were unable to recover it. 00:33:56.751 [2024-07-20 18:09:31.525079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.751 [2024-07-20 18:09:31.525295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.751 [2024-07-20 18:09:31.525321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.751 [2024-07-20 18:09:31.525335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.751 [2024-07-20 18:09:31.525348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.751 [2024-07-20 18:09:31.525378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.751 qpair failed and we were unable to recover it. 00:33:56.751 [2024-07-20 18:09:31.535185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:56.751 [2024-07-20 18:09:31.535433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:56.751 [2024-07-20 18:09:31.535458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:56.751 [2024-07-20 18:09:31.535472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:56.751 [2024-07-20 18:09:31.535491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:56.751 [2024-07-20 18:09:31.535522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:56.751 qpair failed and we were unable to recover it. 
00:33:57.008 [2024-07-20 18:09:31.545142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.008 [2024-07-20 18:09:31.545354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.008 [2024-07-20 18:09:31.545381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.008 [2024-07-20 18:09:31.545395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.008 [2024-07-20 18:09:31.545408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.008 [2024-07-20 18:09:31.545439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.008 qpair failed and we were unable to recover it. 00:33:57.008 [2024-07-20 18:09:31.555220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.008 [2024-07-20 18:09:31.555452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.008 [2024-07-20 18:09:31.555478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.008 [2024-07-20 18:09:31.555492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.008 [2024-07-20 18:09:31.555506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.009 [2024-07-20 18:09:31.555536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.009 qpair failed and we were unable to recover it. 00:33:57.009 [2024-07-20 18:09:31.565190] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.009 [2024-07-20 18:09:31.565396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.009 [2024-07-20 18:09:31.565423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.009 [2024-07-20 18:09:31.565437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.009 [2024-07-20 18:09:31.565450] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.009 [2024-07-20 18:09:31.565481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.009 qpair failed and we were unable to recover it. 
00:33:57.009 [2024-07-20 18:09:31.575240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.009 [2024-07-20 18:09:31.575501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.009 [2024-07-20 18:09:31.575528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.009 [2024-07-20 18:09:31.575547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.009 [2024-07-20 18:09:31.575562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.009 [2024-07-20 18:09:31.575593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.009 qpair failed and we were unable to recover it. 00:33:57.009 [2024-07-20 18:09:31.585252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.009 [2024-07-20 18:09:31.585505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.009 [2024-07-20 18:09:31.585531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.009 [2024-07-20 18:09:31.585546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.009 [2024-07-20 18:09:31.585559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.009 [2024-07-20 18:09:31.585589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.009 qpair failed and we were unable to recover it. 00:33:57.009 [2024-07-20 18:09:31.595306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.009 [2024-07-20 18:09:31.595517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.009 [2024-07-20 18:09:31.595543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.009 [2024-07-20 18:09:31.595557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.009 [2024-07-20 18:09:31.595570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.009 [2024-07-20 18:09:31.595600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.009 qpair failed and we were unable to recover it. 
00:33:57.009 [2024-07-20 18:09:31.605328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.009 [2024-07-20 18:09:31.605540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.009 [2024-07-20 18:09:31.605567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.009 [2024-07-20 18:09:31.605586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.009 [2024-07-20 18:09:31.605600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.009 [2024-07-20 18:09:31.605632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.009 qpair failed and we were unable to recover it. 00:33:57.009 [2024-07-20 18:09:31.615346] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.009 [2024-07-20 18:09:31.615572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.009 [2024-07-20 18:09:31.615599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.009 [2024-07-20 18:09:31.615613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.009 [2024-07-20 18:09:31.615626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.009 [2024-07-20 18:09:31.615656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.009 qpair failed and we were unable to recover it. 00:33:57.009 [2024-07-20 18:09:31.625382] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.009 [2024-07-20 18:09:31.625590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.009 [2024-07-20 18:09:31.625616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.009 [2024-07-20 18:09:31.625631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.009 [2024-07-20 18:09:31.625650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.009 [2024-07-20 18:09:31.625680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.009 qpair failed and we were unable to recover it. 
00:33:57.009 [2024-07-20 18:09:31.635395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.009 [2024-07-20 18:09:31.635601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.009 [2024-07-20 18:09:31.635626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.009 [2024-07-20 18:09:31.635640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.009 [2024-07-20 18:09:31.635653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.009 [2024-07-20 18:09:31.635683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.009 qpair failed and we were unable to recover it. 00:33:57.009 [2024-07-20 18:09:31.645459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.009 [2024-07-20 18:09:31.645671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.009 [2024-07-20 18:09:31.645697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.009 [2024-07-20 18:09:31.645711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.009 [2024-07-20 18:09:31.645725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.009 [2024-07-20 18:09:31.645755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.009 qpair failed and we were unable to recover it. 00:33:57.009 [2024-07-20 18:09:31.655440] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.009 [2024-07-20 18:09:31.655663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.009 [2024-07-20 18:09:31.655688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.009 [2024-07-20 18:09:31.655702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.009 [2024-07-20 18:09:31.655716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.009 [2024-07-20 18:09:31.655746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.009 qpair failed and we were unable to recover it. 
00:33:57.009 [2024-07-20 18:09:31.665516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.009 [2024-07-20 18:09:31.665763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.009 [2024-07-20 18:09:31.665789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.009 [2024-07-20 18:09:31.665812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.009 [2024-07-20 18:09:31.665826] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.009 [2024-07-20 18:09:31.665856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.009 qpair failed and we were unable to recover it. 00:33:57.009 [2024-07-20 18:09:31.675500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.009 [2024-07-20 18:09:31.675715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.009 [2024-07-20 18:09:31.675741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.009 [2024-07-20 18:09:31.675755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.009 [2024-07-20 18:09:31.675769] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.009 [2024-07-20 18:09:31.675818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.009 qpair failed and we were unable to recover it. 00:33:57.009 [2024-07-20 18:09:31.685583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.009 [2024-07-20 18:09:31.685844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.009 [2024-07-20 18:09:31.685870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.009 [2024-07-20 18:09:31.685884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.009 [2024-07-20 18:09:31.685897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.009 [2024-07-20 18:09:31.685940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.009 qpair failed and we were unable to recover it. 
00:33:57.009 [2024-07-20 18:09:31.695547] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.009 [2024-07-20 18:09:31.695760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.009 [2024-07-20 18:09:31.695786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.009 [2024-07-20 18:09:31.695808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.009 [2024-07-20 18:09:31.695823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.009 [2024-07-20 18:09:31.695853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.009 qpair failed and we were unable to recover it. 00:33:57.009 [2024-07-20 18:09:31.705583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.010 [2024-07-20 18:09:31.705817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.010 [2024-07-20 18:09:31.705844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.010 [2024-07-20 18:09:31.705858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.010 [2024-07-20 18:09:31.705871] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.010 [2024-07-20 18:09:31.705903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.010 qpair failed and we were unable to recover it. 00:33:57.010 [2024-07-20 18:09:31.715642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.010 [2024-07-20 18:09:31.715855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.010 [2024-07-20 18:09:31.715881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.010 [2024-07-20 18:09:31.715902] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.010 [2024-07-20 18:09:31.715916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.010 [2024-07-20 18:09:31.715946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.010 qpair failed and we were unable to recover it. 
00:33:57.010 [2024-07-20 18:09:31.725658] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.010 [2024-07-20 18:09:31.725922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.010 [2024-07-20 18:09:31.725949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.010 [2024-07-20 18:09:31.725964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.010 [2024-07-20 18:09:31.725977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.010 [2024-07-20 18:09:31.726007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.010 qpair failed and we were unable to recover it. 00:33:57.010 [2024-07-20 18:09:31.735736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.010 [2024-07-20 18:09:31.735967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.010 [2024-07-20 18:09:31.735995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.010 [2024-07-20 18:09:31.736015] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.010 [2024-07-20 18:09:31.736029] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.010 [2024-07-20 18:09:31.736060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.010 qpair failed and we were unable to recover it. 00:33:57.010 [2024-07-20 18:09:31.745697] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.010 [2024-07-20 18:09:31.745909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.010 [2024-07-20 18:09:31.745936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.010 [2024-07-20 18:09:31.745950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.010 [2024-07-20 18:09:31.745963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.010 [2024-07-20 18:09:31.746001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.010 qpair failed and we were unable to recover it. 
00:33:57.010 [2024-07-20 18:09:31.755739] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.010 [2024-07-20 18:09:31.755944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.010 [2024-07-20 18:09:31.755970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.010 [2024-07-20 18:09:31.755985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.010 [2024-07-20 18:09:31.755998] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.010 [2024-07-20 18:09:31.756029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.010 qpair failed and we were unable to recover it. 00:33:57.010 [2024-07-20 18:09:31.765779] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.010 [2024-07-20 18:09:31.765995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.010 [2024-07-20 18:09:31.766021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.010 [2024-07-20 18:09:31.766036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.010 [2024-07-20 18:09:31.766049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.010 [2024-07-20 18:09:31.766079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.010 qpair failed and we were unable to recover it. 00:33:57.010 [2024-07-20 18:09:31.775809] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.010 [2024-07-20 18:09:31.776018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.010 [2024-07-20 18:09:31.776044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.010 [2024-07-20 18:09:31.776058] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.010 [2024-07-20 18:09:31.776071] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.010 [2024-07-20 18:09:31.776101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.010 qpair failed and we were unable to recover it. 
00:33:57.010 [2024-07-20 18:09:31.785847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.010 [2024-07-20 18:09:31.786065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.010 [2024-07-20 18:09:31.786091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.010 [2024-07-20 18:09:31.786106] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.010 [2024-07-20 18:09:31.786119] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.010 [2024-07-20 18:09:31.786148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.010 qpair failed and we were unable to recover it. 00:33:57.010 [2024-07-20 18:09:31.795870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.010 [2024-07-20 18:09:31.796081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.010 [2024-07-20 18:09:31.796106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.010 [2024-07-20 18:09:31.796121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.010 [2024-07-20 18:09:31.796134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.010 [2024-07-20 18:09:31.796163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.010 qpair failed and we were unable to recover it. 00:33:57.268 [2024-07-20 18:09:31.805887] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.268 [2024-07-20 18:09:31.806103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.268 [2024-07-20 18:09:31.806135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.268 [2024-07-20 18:09:31.806151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.268 [2024-07-20 18:09:31.806164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.268 [2024-07-20 18:09:31.806194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.268 qpair failed and we were unable to recover it. 
00:33:57.268 [2024-07-20 18:09:31.815934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.268 [2024-07-20 18:09:31.816148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.268 [2024-07-20 18:09:31.816174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.268 [2024-07-20 18:09:31.816191] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.268 [2024-07-20 18:09:31.816204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.268 [2024-07-20 18:09:31.816235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.268 qpair failed and we were unable to recover it. 00:33:57.268 [2024-07-20 18:09:31.825963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.268 [2024-07-20 18:09:31.826215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.268 [2024-07-20 18:09:31.826241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.268 [2024-07-20 18:09:31.826255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.268 [2024-07-20 18:09:31.826268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.268 [2024-07-20 18:09:31.826299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.268 qpair failed and we were unable to recover it. 00:33:57.268 [2024-07-20 18:09:31.836009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.268 [2024-07-20 18:09:31.836226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.268 [2024-07-20 18:09:31.836253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.268 [2024-07-20 18:09:31.836272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.268 [2024-07-20 18:09:31.836286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.268 [2024-07-20 18:09:31.836318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.268 qpair failed and we were unable to recover it. 
00:33:57.268 [2024-07-20 18:09:31.845989] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.268 [2024-07-20 18:09:31.846193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.268 [2024-07-20 18:09:31.846219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.268 [2024-07-20 18:09:31.846233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.268 [2024-07-20 18:09:31.846245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.268 [2024-07-20 18:09:31.846281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.268 qpair failed and we were unable to recover it. 00:33:57.268 [2024-07-20 18:09:31.856028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.268 [2024-07-20 18:09:31.856245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.268 [2024-07-20 18:09:31.856271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.268 [2024-07-20 18:09:31.856285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.268 [2024-07-20 18:09:31.856298] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.268 [2024-07-20 18:09:31.856328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.268 qpair failed and we were unable to recover it. 00:33:57.268 [2024-07-20 18:09:31.866055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.268 [2024-07-20 18:09:31.866265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.268 [2024-07-20 18:09:31.866290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.268 [2024-07-20 18:09:31.866304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.268 [2024-07-20 18:09:31.866317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.268 [2024-07-20 18:09:31.866348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.268 qpair failed and we were unable to recover it. 
00:33:57.268 [2024-07-20 18:09:31.876064] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.269 [2024-07-20 18:09:31.876268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.269 [2024-07-20 18:09:31.876294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.269 [2024-07-20 18:09:31.876308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.269 [2024-07-20 18:09:31.876321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.269 [2024-07-20 18:09:31.876352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.269 qpair failed and we were unable to recover it. 00:33:57.269 [2024-07-20 18:09:31.886110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.269 [2024-07-20 18:09:31.886312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.269 [2024-07-20 18:09:31.886337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.269 [2024-07-20 18:09:31.886351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.269 [2024-07-20 18:09:31.886363] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.269 [2024-07-20 18:09:31.886394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.269 qpair failed and we were unable to recover it. 00:33:57.269 [2024-07-20 18:09:31.896128] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.269 [2024-07-20 18:09:31.896346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.269 [2024-07-20 18:09:31.896377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.269 [2024-07-20 18:09:31.896393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.269 [2024-07-20 18:09:31.896406] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.269 [2024-07-20 18:09:31.896436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.269 qpair failed and we were unable to recover it. 
00:33:57.269 [2024-07-20 18:09:31.906135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.269 [2024-07-20 18:09:31.906342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.269 [2024-07-20 18:09:31.906367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.269 [2024-07-20 18:09:31.906381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.269 [2024-07-20 18:09:31.906394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.269 [2024-07-20 18:09:31.906425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.269 qpair failed and we were unable to recover it. 00:33:57.269 [2024-07-20 18:09:31.916192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.269 [2024-07-20 18:09:31.916402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.269 [2024-07-20 18:09:31.916427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.269 [2024-07-20 18:09:31.916441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.269 [2024-07-20 18:09:31.916454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.269 [2024-07-20 18:09:31.916484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.269 qpair failed and we were unable to recover it. 00:33:57.269 [2024-07-20 18:09:31.926254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.269 [2024-07-20 18:09:31.926464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.269 [2024-07-20 18:09:31.926490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.269 [2024-07-20 18:09:31.926504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.269 [2024-07-20 18:09:31.926517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.269 [2024-07-20 18:09:31.926547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.269 qpair failed and we were unable to recover it. 
00:33:57.269 [2024-07-20 18:09:31.936269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.269 [2024-07-20 18:09:31.936510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.269 [2024-07-20 18:09:31.936536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.269 [2024-07-20 18:09:31.936550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.269 [2024-07-20 18:09:31.936563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.269 [2024-07-20 18:09:31.936600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.269 qpair failed and we were unable to recover it. 00:33:57.269 [2024-07-20 18:09:31.946267] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.269 [2024-07-20 18:09:31.946475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.269 [2024-07-20 18:09:31.946501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.269 [2024-07-20 18:09:31.946515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.269 [2024-07-20 18:09:31.946528] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.269 [2024-07-20 18:09:31.946559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.269 qpair failed and we were unable to recover it. 00:33:57.269 [2024-07-20 18:09:31.956305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.269 [2024-07-20 18:09:31.956508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.269 [2024-07-20 18:09:31.956534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.269 [2024-07-20 18:09:31.956548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.269 [2024-07-20 18:09:31.956561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.269 [2024-07-20 18:09:31.956592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.269 qpair failed and we were unable to recover it. 
00:33:57.269 [2024-07-20 18:09:31.966349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.269 [2024-07-20 18:09:31.966564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.269 [2024-07-20 18:09:31.966590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.269 [2024-07-20 18:09:31.966605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.269 [2024-07-20 18:09:31.966618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.269 [2024-07-20 18:09:31.966649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.269 qpair failed and we were unable to recover it. 00:33:57.269 [2024-07-20 18:09:31.976347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.269 [2024-07-20 18:09:31.976567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.269 [2024-07-20 18:09:31.976593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.269 [2024-07-20 18:09:31.976608] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.269 [2024-07-20 18:09:31.976622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.269 [2024-07-20 18:09:31.976652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.269 qpair failed and we were unable to recover it. 00:33:57.269 [2024-07-20 18:09:31.986439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.269 [2024-07-20 18:09:31.986670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.269 [2024-07-20 18:09:31.986697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.269 [2024-07-20 18:09:31.986712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.269 [2024-07-20 18:09:31.986725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.269 [2024-07-20 18:09:31.986758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.269 qpair failed and we were unable to recover it. 
00:33:57.269 [2024-07-20 18:09:31.996443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.269 [2024-07-20 18:09:31.996654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.269 [2024-07-20 18:09:31.996681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.269 [2024-07-20 18:09:31.996696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.269 [2024-07-20 18:09:31.996709] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.269 [2024-07-20 18:09:31.996741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.269 qpair failed and we were unable to recover it. 00:33:57.269 [2024-07-20 18:09:32.006463] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.269 [2024-07-20 18:09:32.006673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.269 [2024-07-20 18:09:32.006700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.269 [2024-07-20 18:09:32.006714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.269 [2024-07-20 18:09:32.006727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.269 [2024-07-20 18:09:32.006757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.269 qpair failed and we were unable to recover it. 00:33:57.269 [2024-07-20 18:09:32.016514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.269 [2024-07-20 18:09:32.016731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.269 [2024-07-20 18:09:32.016758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.270 [2024-07-20 18:09:32.016773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.270 [2024-07-20 18:09:32.016786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.270 [2024-07-20 18:09:32.016845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.270 qpair failed and we were unable to recover it. 
00:33:57.270 [2024-07-20 18:09:32.026620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.270 [2024-07-20 18:09:32.026844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.270 [2024-07-20 18:09:32.026870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.270 [2024-07-20 18:09:32.026885] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.270 [2024-07-20 18:09:32.026903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.270 [2024-07-20 18:09:32.026934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.270 qpair failed and we were unable to recover it. 00:33:57.270 [2024-07-20 18:09:32.036563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.270 [2024-07-20 18:09:32.036771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.270 [2024-07-20 18:09:32.036804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.270 [2024-07-20 18:09:32.036820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.270 [2024-07-20 18:09:32.036845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.270 [2024-07-20 18:09:32.036875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.270 qpair failed and we were unable to recover it. 00:33:57.270 [2024-07-20 18:09:32.046645] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.270 [2024-07-20 18:09:32.046867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.270 [2024-07-20 18:09:32.046895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.270 [2024-07-20 18:09:32.046915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.270 [2024-07-20 18:09:32.046930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.270 [2024-07-20 18:09:32.046963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.270 qpair failed and we were unable to recover it. 
00:33:57.270 [2024-07-20 18:09:32.056677] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.270 [2024-07-20 18:09:32.056928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.270 [2024-07-20 18:09:32.056954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.270 [2024-07-20 18:09:32.056969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.270 [2024-07-20 18:09:32.056983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.270 [2024-07-20 18:09:32.057013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.270 qpair failed and we were unable to recover it. 00:33:57.528 [2024-07-20 18:09:32.066642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.528 [2024-07-20 18:09:32.066901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.528 [2024-07-20 18:09:32.066928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.528 [2024-07-20 18:09:32.066942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.528 [2024-07-20 18:09:32.066956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.528 [2024-07-20 18:09:32.066987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.528 qpair failed and we were unable to recover it. 00:33:57.528 [2024-07-20 18:09:32.076786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.528 [2024-07-20 18:09:32.077030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.528 [2024-07-20 18:09:32.077056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.528 [2024-07-20 18:09:32.077071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.528 [2024-07-20 18:09:32.077084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.528 [2024-07-20 18:09:32.077115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.528 qpair failed and we were unable to recover it. 
00:33:57.528 [2024-07-20 18:09:32.086732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.528 [2024-07-20 18:09:32.086944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.528 [2024-07-20 18:09:32.086971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.528 [2024-07-20 18:09:32.086985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.528 [2024-07-20 18:09:32.086999] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.528 [2024-07-20 18:09:32.087030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.528 qpair failed and we were unable to recover it. 00:33:57.528 [2024-07-20 18:09:32.096837] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.528 [2024-07-20 18:09:32.097049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.528 [2024-07-20 18:09:32.097075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.528 [2024-07-20 18:09:32.097089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.528 [2024-07-20 18:09:32.097102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.528 [2024-07-20 18:09:32.097133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.528 qpair failed and we were unable to recover it. 00:33:57.528 [2024-07-20 18:09:32.106800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.528 [2024-07-20 18:09:32.107023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.528 [2024-07-20 18:09:32.107048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.528 [2024-07-20 18:09:32.107063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.528 [2024-07-20 18:09:32.107076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.528 [2024-07-20 18:09:32.107106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.528 qpair failed and we were unable to recover it. 
00:33:57.528 [2024-07-20 18:09:32.116807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.528 [2024-07-20 18:09:32.117014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.528 [2024-07-20 18:09:32.117040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.528 [2024-07-20 18:09:32.117064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.528 [2024-07-20 18:09:32.117078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.528 [2024-07-20 18:09:32.117112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.528 qpair failed and we were unable to recover it. 00:33:57.528 [2024-07-20 18:09:32.126819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.528 [2024-07-20 18:09:32.127039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.528 [2024-07-20 18:09:32.127066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.528 [2024-07-20 18:09:32.127085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.528 [2024-07-20 18:09:32.127099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.528 [2024-07-20 18:09:32.127131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.528 qpair failed and we were unable to recover it. 00:33:57.528 [2024-07-20 18:09:32.136839] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.528 [2024-07-20 18:09:32.137064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.528 [2024-07-20 18:09:32.137091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.529 [2024-07-20 18:09:32.137105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.529 [2024-07-20 18:09:32.137118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.529 [2024-07-20 18:09:32.137148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.529 qpair failed and we were unable to recover it. 
00:33:57.529 [2024-07-20 18:09:32.146899] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.529 [2024-07-20 18:09:32.147126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.529 [2024-07-20 18:09:32.147152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.529 [2024-07-20 18:09:32.147167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.529 [2024-07-20 18:09:32.147183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.529 [2024-07-20 18:09:32.147213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.529 qpair failed and we were unable to recover it. 00:33:57.529 [2024-07-20 18:09:32.156931] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.529 [2024-07-20 18:09:32.157135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.529 [2024-07-20 18:09:32.157161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.529 [2024-07-20 18:09:32.157176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.529 [2024-07-20 18:09:32.157189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.529 [2024-07-20 18:09:32.157218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.529 qpair failed and we were unable to recover it. 00:33:57.529 [2024-07-20 18:09:32.166974] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.529 [2024-07-20 18:09:32.167213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.529 [2024-07-20 18:09:32.167238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.529 [2024-07-20 18:09:32.167253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.529 [2024-07-20 18:09:32.167266] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.529 [2024-07-20 18:09:32.167295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.529 qpair failed and we were unable to recover it. 
00:33:57.529 [2024-07-20 18:09:32.176986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.529 [2024-07-20 18:09:32.177234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.529 [2024-07-20 18:09:32.177260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.529 [2024-07-20 18:09:32.177275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.529 [2024-07-20 18:09:32.177288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.529 [2024-07-20 18:09:32.177318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.529 qpair failed and we were unable to recover it. 00:33:57.529 [2024-07-20 18:09:32.187007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.529 [2024-07-20 18:09:32.187304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.529 [2024-07-20 18:09:32.187330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.529 [2024-07-20 18:09:32.187344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.529 [2024-07-20 18:09:32.187357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.529 [2024-07-20 18:09:32.187387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.529 qpair failed and we were unable to recover it. 00:33:57.529 [2024-07-20 18:09:32.197079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.529 [2024-07-20 18:09:32.197294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.529 [2024-07-20 18:09:32.197323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.529 [2024-07-20 18:09:32.197342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.529 [2024-07-20 18:09:32.197356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.529 [2024-07-20 18:09:32.197386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.529 qpair failed and we were unable to recover it. 
00:33:57.529 [2024-07-20 18:09:32.207064] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.529 [2024-07-20 18:09:32.207287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.529 [2024-07-20 18:09:32.207313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.529 [2024-07-20 18:09:32.207334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.529 [2024-07-20 18:09:32.207348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.529 [2024-07-20 18:09:32.207379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.529 qpair failed and we were unable to recover it. 00:33:57.529 [2024-07-20 18:09:32.217100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.529 [2024-07-20 18:09:32.217314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.529 [2024-07-20 18:09:32.217340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.529 [2024-07-20 18:09:32.217354] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.529 [2024-07-20 18:09:32.217367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.529 [2024-07-20 18:09:32.217397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.529 qpair failed and we were unable to recover it. 00:33:57.529 [2024-07-20 18:09:32.227110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.529 [2024-07-20 18:09:32.227315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.529 [2024-07-20 18:09:32.227341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.529 [2024-07-20 18:09:32.227355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.529 [2024-07-20 18:09:32.227368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.529 [2024-07-20 18:09:32.227399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.529 qpair failed and we were unable to recover it. 
00:33:57.529 [2024-07-20 18:09:32.237174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.529 [2024-07-20 18:09:32.237387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.529 [2024-07-20 18:09:32.237413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.529 [2024-07-20 18:09:32.237428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.529 [2024-07-20 18:09:32.237441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.529 [2024-07-20 18:09:32.237471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.529 qpair failed and we were unable to recover it. 00:33:57.529 [2024-07-20 18:09:32.247178] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.529 [2024-07-20 18:09:32.247402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.529 [2024-07-20 18:09:32.247428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.529 [2024-07-20 18:09:32.247443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.529 [2024-07-20 18:09:32.247456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.529 [2024-07-20 18:09:32.247486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.529 qpair failed and we were unable to recover it. 00:33:57.529 [2024-07-20 18:09:32.257210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.529 [2024-07-20 18:09:32.257426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.529 [2024-07-20 18:09:32.257452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.529 [2024-07-20 18:09:32.257466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.529 [2024-07-20 18:09:32.257480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.529 [2024-07-20 18:09:32.257510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.529 qpair failed and we were unable to recover it. 
00:33:57.529 [2024-07-20 18:09:32.267224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.529 [2024-07-20 18:09:32.267482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.529 [2024-07-20 18:09:32.267509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.529 [2024-07-20 18:09:32.267523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.529 [2024-07-20 18:09:32.267537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.529 [2024-07-20 18:09:32.267570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.529 qpair failed and we were unable to recover it. 00:33:57.529 [2024-07-20 18:09:32.277243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.529 [2024-07-20 18:09:32.277465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.529 [2024-07-20 18:09:32.277492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.529 [2024-07-20 18:09:32.277506] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.529 [2024-07-20 18:09:32.277519] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.530 [2024-07-20 18:09:32.277549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.530 qpair failed and we were unable to recover it. 00:33:57.530 [2024-07-20 18:09:32.287310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.530 [2024-07-20 18:09:32.287543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.530 [2024-07-20 18:09:32.287569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.530 [2024-07-20 18:09:32.287584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.530 [2024-07-20 18:09:32.287597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.530 [2024-07-20 18:09:32.287627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.530 qpair failed and we were unable to recover it. 
00:33:57.530 [2024-07-20 18:09:32.297336] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.530 [2024-07-20 18:09:32.297554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.530 [2024-07-20 18:09:32.297585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.530 [2024-07-20 18:09:32.297600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.530 [2024-07-20 18:09:32.297613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.530 [2024-07-20 18:09:32.297644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.530 qpair failed and we were unable to recover it. 00:33:57.530 [2024-07-20 18:09:32.307373] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.530 [2024-07-20 18:09:32.307613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.530 [2024-07-20 18:09:32.307639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.530 [2024-07-20 18:09:32.307654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.530 [2024-07-20 18:09:32.307667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.530 [2024-07-20 18:09:32.307698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.530 qpair failed and we were unable to recover it. 00:33:57.530 [2024-07-20 18:09:32.317348] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.530 [2024-07-20 18:09:32.317552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.530 [2024-07-20 18:09:32.317578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.530 [2024-07-20 18:09:32.317593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.530 [2024-07-20 18:09:32.317606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.530 [2024-07-20 18:09:32.317636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.530 qpair failed and we were unable to recover it. 
00:33:57.788 [2024-07-20 18:09:32.327424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.788 [2024-07-20 18:09:32.327783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.788 [2024-07-20 18:09:32.327821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.788 [2024-07-20 18:09:32.327836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.788 [2024-07-20 18:09:32.327849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.788 [2024-07-20 18:09:32.327880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.788 qpair failed and we were unable to recover it. 00:33:57.788 [2024-07-20 18:09:32.337447] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.788 [2024-07-20 18:09:32.337678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.788 [2024-07-20 18:09:32.337704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.788 [2024-07-20 18:09:32.337719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.788 [2024-07-20 18:09:32.337732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.788 [2024-07-20 18:09:32.337782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.788 qpair failed and we were unable to recover it. 00:33:57.788 [2024-07-20 18:09:32.347444] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.788 [2024-07-20 18:09:32.347692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.788 [2024-07-20 18:09:32.347718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.788 [2024-07-20 18:09:32.347732] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.788 [2024-07-20 18:09:32.347745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.788 [2024-07-20 18:09:32.347775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.788 qpair failed and we were unable to recover it. 
00:33:57.788 [2024-07-20 18:09:32.357488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.788 [2024-07-20 18:09:32.357692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.788 [2024-07-20 18:09:32.357720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.788 [2024-07-20 18:09:32.357735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.788 [2024-07-20 18:09:32.357749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.788 [2024-07-20 18:09:32.357778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-07-20 18:09:32.367490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-07-20 18:09:32.367702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-07-20 18:09:32.367727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-07-20 18:09:32.367741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-07-20 18:09:32.367755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.789 [2024-07-20 18:09:32.367785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-07-20 18:09:32.377533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-07-20 18:09:32.377762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-07-20 18:09:32.377788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-07-20 18:09:32.377811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-07-20 18:09:32.377827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.789 [2024-07-20 18:09:32.377859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.789 qpair failed and we were unable to recover it. 
00:33:57.789 [2024-07-20 18:09:32.387566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-07-20 18:09:32.387777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-07-20 18:09:32.387818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-07-20 18:09:32.387836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-07-20 18:09:32.387850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.789 [2024-07-20 18:09:32.387882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-07-20 18:09:32.397570] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-07-20 18:09:32.397790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-07-20 18:09:32.397825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-07-20 18:09:32.397839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-07-20 18:09:32.397853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.789 [2024-07-20 18:09:32.397883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-07-20 18:09:32.407599] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-07-20 18:09:32.407814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-07-20 18:09:32.407840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-07-20 18:09:32.407854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-07-20 18:09:32.407867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.789 [2024-07-20 18:09:32.407898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.789 qpair failed and we were unable to recover it. 
00:33:57.789 [2024-07-20 18:09:32.417674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-07-20 18:09:32.417891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-07-20 18:09:32.417917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-07-20 18:09:32.417932] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-07-20 18:09:32.417945] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.789 [2024-07-20 18:09:32.417975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-07-20 18:09:32.427716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-07-20 18:09:32.427971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-07-20 18:09:32.427997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-07-20 18:09:32.428011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-07-20 18:09:32.428030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.789 [2024-07-20 18:09:32.428061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-07-20 18:09:32.437708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-07-20 18:09:32.437965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-07-20 18:09:32.437991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-07-20 18:09:32.438005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-07-20 18:09:32.438018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.789 [2024-07-20 18:09:32.438049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.789 qpair failed and we were unable to recover it. 
00:33:57.789 [2024-07-20 18:09:32.447736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-07-20 18:09:32.447946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-07-20 18:09:32.447972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-07-20 18:09:32.447986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-07-20 18:09:32.447999] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.789 [2024-07-20 18:09:32.448029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-07-20 18:09:32.457768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-07-20 18:09:32.458026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-07-20 18:09:32.458052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-07-20 18:09:32.458067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-07-20 18:09:32.458080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.789 [2024-07-20 18:09:32.458110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-07-20 18:09:32.467816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-07-20 18:09:32.468035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-07-20 18:09:32.468061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-07-20 18:09:32.468075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-07-20 18:09:32.468088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.789 [2024-07-20 18:09:32.468118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.789 qpair failed and we were unable to recover it. 
00:33:57.789 [2024-07-20 18:09:32.477828] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-07-20 18:09:32.478068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-07-20 18:09:32.478093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-07-20 18:09:32.478108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-07-20 18:09:32.478121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.789 [2024-07-20 18:09:32.478150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-07-20 18:09:32.487859] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-07-20 18:09:32.488083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-07-20 18:09:32.488109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-07-20 18:09:32.488123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-07-20 18:09:32.488136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.789 [2024-07-20 18:09:32.488167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.789 qpair failed and we were unable to recover it. 00:33:57.789 [2024-07-20 18:09:32.497882] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.789 [2024-07-20 18:09:32.498104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.789 [2024-07-20 18:09:32.498131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.789 [2024-07-20 18:09:32.498151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.789 [2024-07-20 18:09:32.498165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.789 [2024-07-20 18:09:32.498196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.789 qpair failed and we were unable to recover it. 
00:33:57.789 [2024-07-20 18:09:32.507893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.790 [2024-07-20 18:09:32.508112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.790 [2024-07-20 18:09:32.508139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.790 [2024-07-20 18:09:32.508153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.790 [2024-07-20 18:09:32.508166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.790 [2024-07-20 18:09:32.508196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.790 qpair failed and we were unable to recover it. 00:33:57.790 [2024-07-20 18:09:32.517910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.790 [2024-07-20 18:09:32.518111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.790 [2024-07-20 18:09:32.518137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.790 [2024-07-20 18:09:32.518158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.790 [2024-07-20 18:09:32.518172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.790 [2024-07-20 18:09:32.518202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.790 qpair failed and we were unable to recover it. 00:33:57.790 [2024-07-20 18:09:32.527948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.790 [2024-07-20 18:09:32.528159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.790 [2024-07-20 18:09:32.528184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.790 [2024-07-20 18:09:32.528199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.790 [2024-07-20 18:09:32.528212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.790 [2024-07-20 18:09:32.528242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.790 qpair failed and we were unable to recover it. 
00:33:57.790 [2024-07-20 18:09:32.537997] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.790 [2024-07-20 18:09:32.538205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.790 [2024-07-20 18:09:32.538231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.790 [2024-07-20 18:09:32.538245] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.790 [2024-07-20 18:09:32.538258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.790 [2024-07-20 18:09:32.538288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.790 qpair failed and we were unable to recover it. 00:33:57.790 [2024-07-20 18:09:32.548007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.790 [2024-07-20 18:09:32.548262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.790 [2024-07-20 18:09:32.548288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.790 [2024-07-20 18:09:32.548302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.790 [2024-07-20 18:09:32.548315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.790 [2024-07-20 18:09:32.548345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.790 qpair failed and we were unable to recover it. 00:33:57.790 [2024-07-20 18:09:32.558070] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.790 [2024-07-20 18:09:32.558280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.790 [2024-07-20 18:09:32.558306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.790 [2024-07-20 18:09:32.558321] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.790 [2024-07-20 18:09:32.558334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.790 [2024-07-20 18:09:32.558364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.790 qpair failed and we were unable to recover it. 
00:33:57.790 [2024-07-20 18:09:32.568091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.790 [2024-07-20 18:09:32.568304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.790 [2024-07-20 18:09:32.568330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.790 [2024-07-20 18:09:32.568345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.790 [2024-07-20 18:09:32.568358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.790 [2024-07-20 18:09:32.568388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.790 qpair failed and we were unable to recover it. 00:33:57.790 [2024-07-20 18:09:32.578107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:57.790 [2024-07-20 18:09:32.578358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:57.790 [2024-07-20 18:09:32.578384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:57.790 [2024-07-20 18:09:32.578398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:57.790 [2024-07-20 18:09:32.578411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:57.790 [2024-07-20 18:09:32.578441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.790 qpair failed and we were unable to recover it. 00:33:58.049 [2024-07-20 18:09:32.588120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.049 [2024-07-20 18:09:32.588330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.049 [2024-07-20 18:09:32.588356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.049 [2024-07-20 18:09:32.588370] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.049 [2024-07-20 18:09:32.588383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.049 [2024-07-20 18:09:32.588414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.049 qpair failed and we were unable to recover it. 
00:33:58.049 [2024-07-20 18:09:32.598194] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.049 [2024-07-20 18:09:32.598404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.049 [2024-07-20 18:09:32.598431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.049 [2024-07-20 18:09:32.598446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.049 [2024-07-20 18:09:32.598462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.049 [2024-07-20 18:09:32.598492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.049 qpair failed and we were unable to recover it. 00:33:58.049 [2024-07-20 18:09:32.608212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.049 [2024-07-20 18:09:32.608422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.049 [2024-07-20 18:09:32.608455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.049 [2024-07-20 18:09:32.608476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.049 [2024-07-20 18:09:32.608490] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.049 [2024-07-20 18:09:32.608522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.049 qpair failed and we were unable to recover it. 00:33:58.049 [2024-07-20 18:09:32.618249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.049 [2024-07-20 18:09:32.618506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.049 [2024-07-20 18:09:32.618532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.049 [2024-07-20 18:09:32.618547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.049 [2024-07-20 18:09:32.618560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.049 [2024-07-20 18:09:32.618590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.049 qpair failed and we were unable to recover it. 
00:33:58.049 [2024-07-20 18:09:32.628255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.049 [2024-07-20 18:09:32.628508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.049 [2024-07-20 18:09:32.628534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.049 [2024-07-20 18:09:32.628549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.049 [2024-07-20 18:09:32.628563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.049 [2024-07-20 18:09:32.628595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.049 qpair failed and we were unable to recover it. 00:33:58.049 [2024-07-20 18:09:32.638292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.049 [2024-07-20 18:09:32.638526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.049 [2024-07-20 18:09:32.638552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.049 [2024-07-20 18:09:32.638567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.049 [2024-07-20 18:09:32.638580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.049 [2024-07-20 18:09:32.638611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.049 qpair failed and we were unable to recover it. 00:33:58.049 [2024-07-20 18:09:32.648294] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.049 [2024-07-20 18:09:32.648504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.049 [2024-07-20 18:09:32.648531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.049 [2024-07-20 18:09:32.648545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.049 [2024-07-20 18:09:32.648558] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.049 [2024-07-20 18:09:32.648588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.049 qpair failed and we were unable to recover it. 
00:33:58.049 [2024-07-20 18:09:32.658315] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.049 [2024-07-20 18:09:32.658538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.049 [2024-07-20 18:09:32.658563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.049 [2024-07-20 18:09:32.658578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.049 [2024-07-20 18:09:32.658592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.049 [2024-07-20 18:09:32.658622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.049 qpair failed and we were unable to recover it. 00:33:58.049 [2024-07-20 18:09:32.668363] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.049 [2024-07-20 18:09:32.668580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.049 [2024-07-20 18:09:32.668606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.049 [2024-07-20 18:09:32.668621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.049 [2024-07-20 18:09:32.668634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.049 [2024-07-20 18:09:32.668664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.049 qpair failed and we were unable to recover it. 00:33:58.049 [2024-07-20 18:09:32.678414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.049 [2024-07-20 18:09:32.678626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.049 [2024-07-20 18:09:32.678652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.049 [2024-07-20 18:09:32.678667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.050 [2024-07-20 18:09:32.678680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.050 [2024-07-20 18:09:32.678710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.050 qpair failed and we were unable to recover it. 
00:33:58.050 [2024-07-20 18:09:32.688423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.050 [2024-07-20 18:09:32.688631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.050 [2024-07-20 18:09:32.688657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.050 [2024-07-20 18:09:32.688671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.050 [2024-07-20 18:09:32.688684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.050 [2024-07-20 18:09:32.688714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.050 qpair failed and we were unable to recover it. 00:33:58.050 [2024-07-20 18:09:32.698491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.050 [2024-07-20 18:09:32.698733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.050 [2024-07-20 18:09:32.698763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.050 [2024-07-20 18:09:32.698778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.050 [2024-07-20 18:09:32.698799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.050 [2024-07-20 18:09:32.698833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.050 qpair failed and we were unable to recover it. 00:33:58.050 [2024-07-20 18:09:32.708479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.050 [2024-07-20 18:09:32.708716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.050 [2024-07-20 18:09:32.708742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.050 [2024-07-20 18:09:32.708756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.050 [2024-07-20 18:09:32.708770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.050 [2024-07-20 18:09:32.708808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.050 qpair failed and we were unable to recover it. 
00:33:58.050 [2024-07-20 18:09:32.718495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.050 [2024-07-20 18:09:32.718718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.050 [2024-07-20 18:09:32.718743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.050 [2024-07-20 18:09:32.718758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.050 [2024-07-20 18:09:32.718771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.050 [2024-07-20 18:09:32.718808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.050 qpair failed and we were unable to recover it. 00:33:58.050 [2024-07-20 18:09:32.728542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.050 [2024-07-20 18:09:32.728775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.050 [2024-07-20 18:09:32.728811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.050 [2024-07-20 18:09:32.728827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.050 [2024-07-20 18:09:32.728840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.050 [2024-07-20 18:09:32.728870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.050 qpair failed and we were unable to recover it. 00:33:58.050 [2024-07-20 18:09:32.738568] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.050 [2024-07-20 18:09:32.738826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.050 [2024-07-20 18:09:32.738852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.050 [2024-07-20 18:09:32.738867] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.050 [2024-07-20 18:09:32.738880] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.050 [2024-07-20 18:09:32.738916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.050 qpair failed and we were unable to recover it. 
00:33:58.050 [2024-07-20 18:09:32.748585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.050 [2024-07-20 18:09:32.748810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.050 [2024-07-20 18:09:32.748837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.050 [2024-07-20 18:09:32.748851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.050 [2024-07-20 18:09:32.748864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.050 [2024-07-20 18:09:32.748896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.050 qpair failed and we were unable to recover it. 00:33:58.050 [2024-07-20 18:09:32.758632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.050 [2024-07-20 18:09:32.758857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.050 [2024-07-20 18:09:32.758883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.050 [2024-07-20 18:09:32.758898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.050 [2024-07-20 18:09:32.758911] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.050 [2024-07-20 18:09:32.758942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.050 qpair failed and we were unable to recover it. 00:33:58.050 [2024-07-20 18:09:32.768656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.050 [2024-07-20 18:09:32.768880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.050 [2024-07-20 18:09:32.768906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.050 [2024-07-20 18:09:32.768920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.050 [2024-07-20 18:09:32.768933] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.050 [2024-07-20 18:09:32.768963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.050 qpair failed and we were unable to recover it. 
00:33:58.050 [2024-07-20 18:09:32.778695] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.050 [2024-07-20 18:09:32.778907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.050 [2024-07-20 18:09:32.778933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.050 [2024-07-20 18:09:32.778948] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.050 [2024-07-20 18:09:32.778961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.050 [2024-07-20 18:09:32.778990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.050 qpair failed and we were unable to recover it. 00:33:58.050 [2024-07-20 18:09:32.788695] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.050 [2024-07-20 18:09:32.788914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.050 [2024-07-20 18:09:32.788947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.050 [2024-07-20 18:09:32.788962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.050 [2024-07-20 18:09:32.788975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.050 [2024-07-20 18:09:32.789004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.050 qpair failed and we were unable to recover it. 00:33:58.050 [2024-07-20 18:09:32.798735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.050 [2024-07-20 18:09:32.798954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.050 [2024-07-20 18:09:32.798980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.050 [2024-07-20 18:09:32.798995] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.050 [2024-07-20 18:09:32.799008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.050 [2024-07-20 18:09:32.799039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.050 qpair failed and we were unable to recover it. 
00:33:58.050 [2024-07-20 18:09:32.808762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.050 [2024-07-20 18:09:32.808977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.050 [2024-07-20 18:09:32.809003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.050 [2024-07-20 18:09:32.809017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.050 [2024-07-20 18:09:32.809030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.050 [2024-07-20 18:09:32.809060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.050 qpair failed and we were unable to recover it. 00:33:58.050 [2024-07-20 18:09:32.818831] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.050 [2024-07-20 18:09:32.819045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.050 [2024-07-20 18:09:32.819072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.050 [2024-07-20 18:09:32.819086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.050 [2024-07-20 18:09:32.819099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.050 [2024-07-20 18:09:32.819129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.050 qpair failed and we were unable to recover it. 00:33:58.050 [2024-07-20 18:09:32.828818] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.050 [2024-07-20 18:09:32.829021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.051 [2024-07-20 18:09:32.829047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.051 [2024-07-20 18:09:32.829061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.051 [2024-07-20 18:09:32.829080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.051 [2024-07-20 18:09:32.829113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.051 qpair failed and we were unable to recover it. 
00:33:58.051 [2024-07-20 18:09:32.838875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.051 [2024-07-20 18:09:32.839087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.051 [2024-07-20 18:09:32.839113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.051 [2024-07-20 18:09:32.839128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.051 [2024-07-20 18:09:32.839141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.051 [2024-07-20 18:09:32.839171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.051 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-20 18:09:32.848872] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.308 [2024-07-20 18:09:32.849132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.308 [2024-07-20 18:09:32.849158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.308 [2024-07-20 18:09:32.849172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.308 [2024-07-20 18:09:32.849185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.308 [2024-07-20 18:09:32.849215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-20 18:09:32.858918] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.308 [2024-07-20 18:09:32.859136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.308 [2024-07-20 18:09:32.859162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.308 [2024-07-20 18:09:32.859176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.308 [2024-07-20 18:09:32.859192] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.308 [2024-07-20 18:09:32.859223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.308 qpair failed and we were unable to recover it. 
00:33:58.308 [2024-07-20 18:09:32.868949] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.308 [2024-07-20 18:09:32.869166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.308 [2024-07-20 18:09:32.869191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.308 [2024-07-20 18:09:32.869205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.308 [2024-07-20 18:09:32.869220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.308 [2024-07-20 18:09:32.869251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-20 18:09:32.878967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.308 [2024-07-20 18:09:32.879179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.308 [2024-07-20 18:09:32.879206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.308 [2024-07-20 18:09:32.879220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.308 [2024-07-20 18:09:32.879234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.308 [2024-07-20 18:09:32.879263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-20 18:09:32.888998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.308 [2024-07-20 18:09:32.889206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.308 [2024-07-20 18:09:32.889230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.308 [2024-07-20 18:09:32.889244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.308 [2024-07-20 18:09:32.889257] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.308 [2024-07-20 18:09:32.889287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.308 qpair failed and we were unable to recover it. 
00:33:58.308 [2024-07-20 18:09:32.899014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.308 [2024-07-20 18:09:32.899279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.308 [2024-07-20 18:09:32.899305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.308 [2024-07-20 18:09:32.899319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.308 [2024-07-20 18:09:32.899332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.308 [2024-07-20 18:09:32.899362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-20 18:09:32.909060] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.308 [2024-07-20 18:09:32.909274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.308 [2024-07-20 18:09:32.909299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.308 [2024-07-20 18:09:32.909313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.308 [2024-07-20 18:09:32.909326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:32.909357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-20 18:09:32.919082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:32.919295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:32.919321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:32.919335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:32.919354] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:32.919385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 
00:33:58.309 [2024-07-20 18:09:32.929114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:32.929372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:32.929397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:32.929411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:32.929425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:32.929454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-20 18:09:32.939196] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:32.939413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:32.939438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:32.939453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:32.939466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:32.939496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-20 18:09:32.949192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:32.949408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:32.949435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:32.949455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:32.949469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:32.949500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 
00:33:58.309 [2024-07-20 18:09:32.959186] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:32.959392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:32.959418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:32.959432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:32.959446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:32.959476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-20 18:09:32.969216] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:32.969470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:32.969496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:32.969510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:32.969523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:32.969554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-20 18:09:32.979247] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:32.979460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:32.979486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:32.979501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:32.979514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:32.979545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 
00:33:58.309 [2024-07-20 18:09:32.989269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:32.989493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:32.989519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:32.989533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:32.989545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:32.989575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-20 18:09:32.999296] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:32.999513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:32.999539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:32.999553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:32.999567] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:32.999597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-20 18:09:33.009311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:33.009517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:33.009543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:33.009564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:33.009577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:33.009608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 
00:33:58.309 [2024-07-20 18:09:33.019392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:33.019599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:33.019626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:33.019640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:33.019653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:33.019683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-20 18:09:33.029369] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:33.029596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:33.029621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:33.029635] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:33.029649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:33.029679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-20 18:09:33.039382] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:33.039585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:33.039610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:33.039624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:33.039638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:33.039669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 
00:33:58.309 [2024-07-20 18:09:33.049421] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:33.049624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:33.049650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:33.049664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:33.049677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:33.049707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-20 18:09:33.059476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:33.059718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:33.059744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:33.059759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:33.059772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:33.059823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-20 18:09:33.069480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:33.069686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:33.069712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:33.069727] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:33.069739] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:33.069770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 
00:33:58.309 [2024-07-20 18:09:33.079520] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:33.079741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:33.079767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:33.079781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:33.079801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:33.079843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-20 18:09:33.089540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:33.089750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:33.089775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:33.089790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:33.089811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:33.089842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-20 18:09:33.099582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.309 [2024-07-20 18:09:33.099807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.309 [2024-07-20 18:09:33.099838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.309 [2024-07-20 18:09:33.099853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.309 [2024-07-20 18:09:33.099866] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.309 [2024-07-20 18:09:33.099897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.309 qpair failed and we were unable to recover it. 
00:33:58.579 [2024-07-20 18:09:33.109600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.579 [2024-07-20 18:09:33.109814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.579 [2024-07-20 18:09:33.109840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.579 [2024-07-20 18:09:33.109856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.579 [2024-07-20 18:09:33.109869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.579 [2024-07-20 18:09:33.109913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.579 qpair failed and we were unable to recover it. 00:33:58.579 [2024-07-20 18:09:33.119778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.579 [2024-07-20 18:09:33.120046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.579 [2024-07-20 18:09:33.120071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.579 [2024-07-20 18:09:33.120086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.579 [2024-07-20 18:09:33.120099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.579 [2024-07-20 18:09:33.120132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.579 qpair failed and we were unable to recover it. 00:33:58.579 [2024-07-20 18:09:33.129642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.579 [2024-07-20 18:09:33.129851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.579 [2024-07-20 18:09:33.129877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.579 [2024-07-20 18:09:33.129892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.579 [2024-07-20 18:09:33.129905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.579 [2024-07-20 18:09:33.129935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.579 qpair failed and we were unable to recover it. 
00:33:58.579 [2024-07-20 18:09:33.139704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.579 [2024-07-20 18:09:33.139926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.579 [2024-07-20 18:09:33.139951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.579 [2024-07-20 18:09:33.139966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.579 [2024-07-20 18:09:33.139979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.580 [2024-07-20 18:09:33.140015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.580 qpair failed and we were unable to recover it. 00:33:58.580 [2024-07-20 18:09:33.149726] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.580 [2024-07-20 18:09:33.149986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.580 [2024-07-20 18:09:33.150012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.580 [2024-07-20 18:09:33.150026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.580 [2024-07-20 18:09:33.150040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.580 [2024-07-20 18:09:33.150071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.580 qpair failed and we were unable to recover it. 00:33:58.580 [2024-07-20 18:09:33.159740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.580 [2024-07-20 18:09:33.159952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.580 [2024-07-20 18:09:33.159979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.580 [2024-07-20 18:09:33.159993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.580 [2024-07-20 18:09:33.160007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.580 [2024-07-20 18:09:33.160037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.580 qpair failed and we were unable to recover it. 
00:33:58.580 [2024-07-20 18:09:33.169782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.580 [2024-07-20 18:09:33.170016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.580 [2024-07-20 18:09:33.170042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.580 [2024-07-20 18:09:33.170057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.580 [2024-07-20 18:09:33.170071] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.580 [2024-07-20 18:09:33.170101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.580 qpair failed and we were unable to recover it. 00:33:58.580 [2024-07-20 18:09:33.179824] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.580 [2024-07-20 18:09:33.180040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.580 [2024-07-20 18:09:33.180067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.580 [2024-07-20 18:09:33.180081] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.580 [2024-07-20 18:09:33.180098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.580 [2024-07-20 18:09:33.180129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.580 qpair failed and we were unable to recover it. 00:33:58.580 [2024-07-20 18:09:33.189849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.580 [2024-07-20 18:09:33.190063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.580 [2024-07-20 18:09:33.190106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.580 [2024-07-20 18:09:33.190125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.580 [2024-07-20 18:09:33.190138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.580 [2024-07-20 18:09:33.190169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.580 qpair failed and we were unable to recover it. 
00:33:58.580 [2024-07-20 18:09:33.199869] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.580 [2024-07-20 18:09:33.200132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.580 [2024-07-20 18:09:33.200157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.580 [2024-07-20 18:09:33.200172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.580 [2024-07-20 18:09:33.200185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.580 [2024-07-20 18:09:33.200215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.580 qpair failed and we were unable to recover it. 00:33:58.580 [2024-07-20 18:09:33.209883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.580 [2024-07-20 18:09:33.210095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.580 [2024-07-20 18:09:33.210121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.580 [2024-07-20 18:09:33.210136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.580 [2024-07-20 18:09:33.210149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.580 [2024-07-20 18:09:33.210179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.580 qpair failed and we were unable to recover it. 00:33:58.580 [2024-07-20 18:09:33.219926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.580 [2024-07-20 18:09:33.220188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.580 [2024-07-20 18:09:33.220214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.580 [2024-07-20 18:09:33.220229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.580 [2024-07-20 18:09:33.220242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.580 [2024-07-20 18:09:33.220273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.580 qpair failed and we were unable to recover it. 
00:33:58.580 [2024-07-20 18:09:33.229928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.580 [2024-07-20 18:09:33.230142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.580 [2024-07-20 18:09:33.230169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.580 [2024-07-20 18:09:33.230183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.580 [2024-07-20 18:09:33.230201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.580 [2024-07-20 18:09:33.230233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.580 qpair failed and we were unable to recover it. 00:33:58.580 [2024-07-20 18:09:33.239977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.580 [2024-07-20 18:09:33.240180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.580 [2024-07-20 18:09:33.240206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.580 [2024-07-20 18:09:33.240220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.580 [2024-07-20 18:09:33.240234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.580 [2024-07-20 18:09:33.240264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.580 qpair failed and we were unable to recover it. 00:33:58.580 [2024-07-20 18:09:33.249994] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.580 [2024-07-20 18:09:33.250196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.580 [2024-07-20 18:09:33.250222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.580 [2024-07-20 18:09:33.250237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.580 [2024-07-20 18:09:33.250250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.580 [2024-07-20 18:09:33.250279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.580 qpair failed and we were unable to recover it. 
00:33:58.580 [2024-07-20 18:09:33.260100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.580 [2024-07-20 18:09:33.260325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.580 [2024-07-20 18:09:33.260351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.580 [2024-07-20 18:09:33.260366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.580 [2024-07-20 18:09:33.260379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.580 [2024-07-20 18:09:33.260409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.580 qpair failed and we were unable to recover it. 00:33:58.581 [2024-07-20 18:09:33.270095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.581 [2024-07-20 18:09:33.270301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.581 [2024-07-20 18:09:33.270326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.581 [2024-07-20 18:09:33.270341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.581 [2024-07-20 18:09:33.270355] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.581 [2024-07-20 18:09:33.270385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.581 qpair failed and we were unable to recover it. 00:33:58.581 [2024-07-20 18:09:33.280107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.581 [2024-07-20 18:09:33.280323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.581 [2024-07-20 18:09:33.280349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.581 [2024-07-20 18:09:33.280364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.581 [2024-07-20 18:09:33.280377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.581 [2024-07-20 18:09:33.280408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.581 qpair failed and we were unable to recover it. 
00:33:58.581 [2024-07-20 18:09:33.290185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.581 [2024-07-20 18:09:33.290408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.581 [2024-07-20 18:09:33.290434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.581 [2024-07-20 18:09:33.290448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.581 [2024-07-20 18:09:33.290461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.581 [2024-07-20 18:09:33.290492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.581 qpair failed and we were unable to recover it. 00:33:58.581 [2024-07-20 18:09:33.300178] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.581 [2024-07-20 18:09:33.300394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.581 [2024-07-20 18:09:33.300419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.581 [2024-07-20 18:09:33.300434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.581 [2024-07-20 18:09:33.300450] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.581 [2024-07-20 18:09:33.300480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.581 qpair failed and we were unable to recover it. 00:33:58.581 [2024-07-20 18:09:33.310177] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.581 [2024-07-20 18:09:33.310382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.581 [2024-07-20 18:09:33.310408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.581 [2024-07-20 18:09:33.310422] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.581 [2024-07-20 18:09:33.310435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.581 [2024-07-20 18:09:33.310465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.581 qpair failed and we were unable to recover it. 
00:33:58.581 [2024-07-20 18:09:33.320213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.581 [2024-07-20 18:09:33.320465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.581 [2024-07-20 18:09:33.320491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.581 [2024-07-20 18:09:33.320505] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.581 [2024-07-20 18:09:33.320524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.581 [2024-07-20 18:09:33.320555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.581 qpair failed and we were unable to recover it. 00:33:58.581 [2024-07-20 18:09:33.330215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.581 [2024-07-20 18:09:33.330427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.581 [2024-07-20 18:09:33.330453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.581 [2024-07-20 18:09:33.330467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.581 [2024-07-20 18:09:33.330481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.581 [2024-07-20 18:09:33.330510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.581 qpair failed and we were unable to recover it. 00:33:58.581 [2024-07-20 18:09:33.340307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.581 [2024-07-20 18:09:33.340558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.581 [2024-07-20 18:09:33.340584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.581 [2024-07-20 18:09:33.340598] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.581 [2024-07-20 18:09:33.340611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.581 [2024-07-20 18:09:33.340641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.581 qpair failed and we were unable to recover it. 
00:33:58.581 [2024-07-20 18:09:33.350356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.581 [2024-07-20 18:09:33.350566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.581 [2024-07-20 18:09:33.350593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.581 [2024-07-20 18:09:33.350607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.581 [2024-07-20 18:09:33.350620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.581 [2024-07-20 18:09:33.350663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.581 qpair failed and we were unable to recover it. 00:33:58.581 [2024-07-20 18:09:33.360328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.581 [2024-07-20 18:09:33.360536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.581 [2024-07-20 18:09:33.360562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.581 [2024-07-20 18:09:33.360577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.581 [2024-07-20 18:09:33.360590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.581 [2024-07-20 18:09:33.360621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.581 qpair failed and we were unable to recover it. 00:33:58.581 [2024-07-20 18:09:33.370474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.581 [2024-07-20 18:09:33.370709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.581 [2024-07-20 18:09:33.370735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.581 [2024-07-20 18:09:33.370750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.581 [2024-07-20 18:09:33.370763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.581 [2024-07-20 18:09:33.370801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.581 qpair failed and we were unable to recover it. 
00:33:58.840 [2024-07-20 18:09:33.380381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.840 [2024-07-20 18:09:33.380598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.840 [2024-07-20 18:09:33.380625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.840 [2024-07-20 18:09:33.380640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.840 [2024-07-20 18:09:33.380653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.840 [2024-07-20 18:09:33.380686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.840 qpair failed and we were unable to recover it. 00:33:58.840 [2024-07-20 18:09:33.390406] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.840 [2024-07-20 18:09:33.390661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.840 [2024-07-20 18:09:33.390688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.840 [2024-07-20 18:09:33.390703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.840 [2024-07-20 18:09:33.390716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.840 [2024-07-20 18:09:33.390748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.840 qpair failed and we were unable to recover it. 00:33:58.840 [2024-07-20 18:09:33.400477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.840 [2024-07-20 18:09:33.400733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.840 [2024-07-20 18:09:33.400759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.840 [2024-07-20 18:09:33.400773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.840 [2024-07-20 18:09:33.400787] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.840 [2024-07-20 18:09:33.400836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.840 qpair failed and we were unable to recover it. 
00:33:58.840 [2024-07-20 18:09:33.410488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.840 [2024-07-20 18:09:33.410697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.840 [2024-07-20 18:09:33.410723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.840 [2024-07-20 18:09:33.410744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.840 [2024-07-20 18:09:33.410757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.840 [2024-07-20 18:09:33.410788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.840 qpair failed and we were unable to recover it. 00:33:58.840 [2024-07-20 18:09:33.420481] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.840 [2024-07-20 18:09:33.420690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.840 [2024-07-20 18:09:33.420716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.840 [2024-07-20 18:09:33.420730] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.840 [2024-07-20 18:09:33.420743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.840 [2024-07-20 18:09:33.420773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.840 qpair failed and we were unable to recover it. 00:33:58.840 [2024-07-20 18:09:33.430547] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.840 [2024-07-20 18:09:33.430763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.840 [2024-07-20 18:09:33.430790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.840 [2024-07-20 18:09:33.430815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.840 [2024-07-20 18:09:33.430829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.840 [2024-07-20 18:09:33.430860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.840 qpair failed and we were unable to recover it. 
00:33:58.840 [2024-07-20 18:09:33.440585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.840 [2024-07-20 18:09:33.440808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.840 [2024-07-20 18:09:33.440834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.840 [2024-07-20 18:09:33.440848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.840 [2024-07-20 18:09:33.440862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.840 [2024-07-20 18:09:33.440892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.840 qpair failed and we were unable to recover it. 00:33:58.840 [2024-07-20 18:09:33.450665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.840 [2024-07-20 18:09:33.450920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.840 [2024-07-20 18:09:33.450946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.840 [2024-07-20 18:09:33.450961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.840 [2024-07-20 18:09:33.450974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.840 [2024-07-20 18:09:33.451004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.841 qpair failed and we were unable to recover it. 00:33:58.841 [2024-07-20 18:09:33.460642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.841 [2024-07-20 18:09:33.460878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.841 [2024-07-20 18:09:33.460904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.841 [2024-07-20 18:09:33.460918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.841 [2024-07-20 18:09:33.460932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.841 [2024-07-20 18:09:33.460963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.841 qpair failed and we were unable to recover it. 
00:33:58.841 [2024-07-20 18:09:33.470634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.841 [2024-07-20 18:09:33.470857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.841 [2024-07-20 18:09:33.470883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.841 [2024-07-20 18:09:33.470897] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.841 [2024-07-20 18:09:33.470911] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.841 [2024-07-20 18:09:33.470941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.841 qpair failed and we were unable to recover it. 00:33:58.841 [2024-07-20 18:09:33.480692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.841 [2024-07-20 18:09:33.480913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.841 [2024-07-20 18:09:33.480939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.841 [2024-07-20 18:09:33.480953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.841 [2024-07-20 18:09:33.480966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.841 [2024-07-20 18:09:33.480998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.841 qpair failed and we were unable to recover it. 00:33:58.841 [2024-07-20 18:09:33.490680] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.841 [2024-07-20 18:09:33.490887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.841 [2024-07-20 18:09:33.490915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.841 [2024-07-20 18:09:33.490929] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.841 [2024-07-20 18:09:33.490942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.841 [2024-07-20 18:09:33.490974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.841 qpair failed and we were unable to recover it. 
00:33:58.841 [2024-07-20 18:09:33.500721] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.841 [2024-07-20 18:09:33.500942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.841 [2024-07-20 18:09:33.500973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.841 [2024-07-20 18:09:33.500989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.841 [2024-07-20 18:09:33.501003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.841 [2024-07-20 18:09:33.501036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.841 qpair failed and we were unable to recover it. 00:33:58.841 [2024-07-20 18:09:33.510758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.841 [2024-07-20 18:09:33.511011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.841 [2024-07-20 18:09:33.511038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.841 [2024-07-20 18:09:33.511052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.841 [2024-07-20 18:09:33.511066] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.841 [2024-07-20 18:09:33.511095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.841 qpair failed and we were unable to recover it. 00:33:58.841 [2024-07-20 18:09:33.520849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.841 [2024-07-20 18:09:33.521064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.841 [2024-07-20 18:09:33.521099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.841 [2024-07-20 18:09:33.521117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.841 [2024-07-20 18:09:33.521129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.841 [2024-07-20 18:09:33.521164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.841 qpair failed and we were unable to recover it. 
00:33:58.841 [2024-07-20 18:09:33.530799] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.841 [2024-07-20 18:09:33.531005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.841 [2024-07-20 18:09:33.531032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.841 [2024-07-20 18:09:33.531047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.841 [2024-07-20 18:09:33.531060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.841 [2024-07-20 18:09:33.531090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.841 qpair failed and we were unable to recover it. 00:33:58.841 [2024-07-20 18:09:33.540847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.841 [2024-07-20 18:09:33.541099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.841 [2024-07-20 18:09:33.541125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.841 [2024-07-20 18:09:33.541139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.841 [2024-07-20 18:09:33.541152] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.841 [2024-07-20 18:09:33.541191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.841 qpair failed and we were unable to recover it. 00:33:58.841 [2024-07-20 18:09:33.550856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.841 [2024-07-20 18:09:33.551070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.841 [2024-07-20 18:09:33.551096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.841 [2024-07-20 18:09:33.551110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.841 [2024-07-20 18:09:33.551123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.841 [2024-07-20 18:09:33.551153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.841 qpair failed and we were unable to recover it. 
00:33:58.841 [2024-07-20 18:09:33.560888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.841 [2024-07-20 18:09:33.561111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.841 [2024-07-20 18:09:33.561137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.841 [2024-07-20 18:09:33.561151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.841 [2024-07-20 18:09:33.561164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.841 [2024-07-20 18:09:33.561194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.841 qpair failed and we were unable to recover it. 00:33:58.841 [2024-07-20 18:09:33.570937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.841 [2024-07-20 18:09:33.571140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.841 [2024-07-20 18:09:33.571166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.841 [2024-07-20 18:09:33.571181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.841 [2024-07-20 18:09:33.571194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.841 [2024-07-20 18:09:33.571223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.841 qpair failed and we were unable to recover it. 00:33:58.841 [2024-07-20 18:09:33.580954] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.841 [2024-07-20 18:09:33.581180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.841 [2024-07-20 18:09:33.581206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.841 [2024-07-20 18:09:33.581220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.841 [2024-07-20 18:09:33.581233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.841 [2024-07-20 18:09:33.581264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.841 qpair failed and we were unable to recover it. 
00:33:58.841 [2024-07-20 18:09:33.590971] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.841 [2024-07-20 18:09:33.591179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.841 [2024-07-20 18:09:33.591211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.841 [2024-07-20 18:09:33.591226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.841 [2024-07-20 18:09:33.591239] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.841 [2024-07-20 18:09:33.591268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.841 qpair failed and we were unable to recover it. 00:33:58.841 [2024-07-20 18:09:33.601016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.841 [2024-07-20 18:09:33.601227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.842 [2024-07-20 18:09:33.601253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.842 [2024-07-20 18:09:33.601268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.842 [2024-07-20 18:09:33.601281] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.842 [2024-07-20 18:09:33.601312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.842 qpair failed and we were unable to recover it. 00:33:58.842 [2024-07-20 18:09:33.611046] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.842 [2024-07-20 18:09:33.611253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.842 [2024-07-20 18:09:33.611279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.842 [2024-07-20 18:09:33.611293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.842 [2024-07-20 18:09:33.611306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.842 [2024-07-20 18:09:33.611336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.842 qpair failed and we were unable to recover it. 
00:33:58.842 [2024-07-20 18:09:33.621080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.842 [2024-07-20 18:09:33.621294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.842 [2024-07-20 18:09:33.621320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.842 [2024-07-20 18:09:33.621334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.842 [2024-07-20 18:09:33.621347] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.842 [2024-07-20 18:09:33.621378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.842 qpair failed and we were unable to recover it. 00:33:58.842 [2024-07-20 18:09:33.631096] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:58.842 [2024-07-20 18:09:33.631304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:58.842 [2024-07-20 18:09:33.631331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:58.842 [2024-07-20 18:09:33.631345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:58.842 [2024-07-20 18:09:33.631358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:58.842 [2024-07-20 18:09:33.631396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:58.842 qpair failed and we were unable to recover it. 00:33:59.100 [2024-07-20 18:09:33.641109] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.100 [2024-07-20 18:09:33.641314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.100 [2024-07-20 18:09:33.641340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.100 [2024-07-20 18:09:33.641354] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.100 [2024-07-20 18:09:33.641368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.100 [2024-07-20 18:09:33.641398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.100 qpair failed and we were unable to recover it. 
00:33:59.100 [2024-07-20 18:09:33.651175] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.100 [2024-07-20 18:09:33.651427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.100 [2024-07-20 18:09:33.651452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.100 [2024-07-20 18:09:33.651467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.100 [2024-07-20 18:09:33.651480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.100 [2024-07-20 18:09:33.651510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.100 qpair failed and we were unable to recover it. 00:33:59.100 [2024-07-20 18:09:33.661185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.100 [2024-07-20 18:09:33.661399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.100 [2024-07-20 18:09:33.661424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.100 [2024-07-20 18:09:33.661438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.100 [2024-07-20 18:09:33.661452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.100 [2024-07-20 18:09:33.661482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.100 qpair failed and we were unable to recover it. 00:33:59.100 [2024-07-20 18:09:33.671219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.100 [2024-07-20 18:09:33.671467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.100 [2024-07-20 18:09:33.671494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.101 [2024-07-20 18:09:33.671508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.101 [2024-07-20 18:09:33.671521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.101 [2024-07-20 18:09:33.671551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.101 qpair failed and we were unable to recover it. 
00:33:59.101 [2024-07-20 18:09:33.681251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.101 [2024-07-20 18:09:33.681467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.101 [2024-07-20 18:09:33.681494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.101 [2024-07-20 18:09:33.681509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.101 [2024-07-20 18:09:33.681522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.101 [2024-07-20 18:09:33.681566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.101 qpair failed and we were unable to recover it. 00:33:59.101 [2024-07-20 18:09:33.691316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.101 [2024-07-20 18:09:33.691562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.101 [2024-07-20 18:09:33.691587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.101 [2024-07-20 18:09:33.691601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.101 [2024-07-20 18:09:33.691615] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.101 [2024-07-20 18:09:33.691645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.101 qpair failed and we were unable to recover it. 00:33:59.101 [2024-07-20 18:09:33.701308] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.101 [2024-07-20 18:09:33.701610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.101 [2024-07-20 18:09:33.701637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.101 [2024-07-20 18:09:33.701656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.101 [2024-07-20 18:09:33.701669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.101 [2024-07-20 18:09:33.701701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.101 qpair failed and we were unable to recover it. 
00:33:59.101 [2024-07-20 18:09:33.711325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.101 [2024-07-20 18:09:33.711535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.101 [2024-07-20 18:09:33.711562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.101 [2024-07-20 18:09:33.711577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.101 [2024-07-20 18:09:33.711590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.101 [2024-07-20 18:09:33.711621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.101 qpair failed and we were unable to recover it. 00:33:59.101 [2024-07-20 18:09:33.721356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.101 [2024-07-20 18:09:33.721558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.101 [2024-07-20 18:09:33.721584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.101 [2024-07-20 18:09:33.721598] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.101 [2024-07-20 18:09:33.721618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.101 [2024-07-20 18:09:33.721649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.101 qpair failed and we were unable to recover it. 00:33:59.101 [2024-07-20 18:09:33.731385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.101 [2024-07-20 18:09:33.731597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.101 [2024-07-20 18:09:33.731622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.101 [2024-07-20 18:09:33.731637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.101 [2024-07-20 18:09:33.731650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.101 [2024-07-20 18:09:33.731680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.101 qpair failed and we were unable to recover it. 
00:33:59.101 [2024-07-20 18:09:33.741471] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.101 [2024-07-20 18:09:33.741722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.101 [2024-07-20 18:09:33.741748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.101 [2024-07-20 18:09:33.741762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.101 [2024-07-20 18:09:33.741774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.101 [2024-07-20 18:09:33.741812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.101 qpair failed and we were unable to recover it. 00:33:59.101 [2024-07-20 18:09:33.751429] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.101 [2024-07-20 18:09:33.751645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.101 [2024-07-20 18:09:33.751671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.101 [2024-07-20 18:09:33.751685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.101 [2024-07-20 18:09:33.751699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.101 [2024-07-20 18:09:33.751729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.101 qpair failed and we were unable to recover it. 00:33:59.101 [2024-07-20 18:09:33.761486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.101 [2024-07-20 18:09:33.761699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.101 [2024-07-20 18:09:33.761725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.101 [2024-07-20 18:09:33.761739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.101 [2024-07-20 18:09:33.761753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.101 [2024-07-20 18:09:33.761784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.101 qpair failed and we were unable to recover it. 
00:33:59.101 [2024-07-20 18:09:33.771533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.101 [2024-07-20 18:09:33.771798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.101 [2024-07-20 18:09:33.771824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.101 [2024-07-20 18:09:33.771839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.101 [2024-07-20 18:09:33.771852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.101 [2024-07-20 18:09:33.771882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.101 qpair failed and we were unable to recover it. 00:33:59.101 [2024-07-20 18:09:33.781552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.101 [2024-07-20 18:09:33.781766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.101 [2024-07-20 18:09:33.781799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.101 [2024-07-20 18:09:33.781817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.101 [2024-07-20 18:09:33.781832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.101 [2024-07-20 18:09:33.781864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.101 qpair failed and we were unable to recover it. 00:33:59.101 [2024-07-20 18:09:33.791556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.101 [2024-07-20 18:09:33.791765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.101 [2024-07-20 18:09:33.791791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.101 [2024-07-20 18:09:33.791813] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.101 [2024-07-20 18:09:33.791827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.101 [2024-07-20 18:09:33.791857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.101 qpair failed and we were unable to recover it. 
00:33:59.101 [2024-07-20 18:09:33.801563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.101 [2024-07-20 18:09:33.801770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.101 [2024-07-20 18:09:33.801805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.101 [2024-07-20 18:09:33.801823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.101 [2024-07-20 18:09:33.801837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.101 [2024-07-20 18:09:33.801866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.101 qpair failed and we were unable to recover it. 00:33:59.101 [2024-07-20 18:09:33.811603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.102 [2024-07-20 18:09:33.811818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.102 [2024-07-20 18:09:33.811844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.102 [2024-07-20 18:09:33.811865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.102 [2024-07-20 18:09:33.811878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.102 [2024-07-20 18:09:33.811909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.102 qpair failed and we were unable to recover it. 00:33:59.102 [2024-07-20 18:09:33.821653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.102 [2024-07-20 18:09:33.821869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.102 [2024-07-20 18:09:33.821894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.102 [2024-07-20 18:09:33.821908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.102 [2024-07-20 18:09:33.821922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.102 [2024-07-20 18:09:33.821952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.102 qpair failed and we were unable to recover it. 
00:33:59.102 [2024-07-20 18:09:33.831669] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.102 [2024-07-20 18:09:33.831898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.102 [2024-07-20 18:09:33.831924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.102 [2024-07-20 18:09:33.831938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.102 [2024-07-20 18:09:33.831951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.102 [2024-07-20 18:09:33.831981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.102 qpair failed and we were unable to recover it. 00:33:59.102 [2024-07-20 18:09:33.841700] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.102 [2024-07-20 18:09:33.841913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.102 [2024-07-20 18:09:33.841938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.102 [2024-07-20 18:09:33.841952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.102 [2024-07-20 18:09:33.841966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.102 [2024-07-20 18:09:33.841996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.102 qpair failed and we were unable to recover it. 00:33:59.102 [2024-07-20 18:09:33.851751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.102 [2024-07-20 18:09:33.852007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.102 [2024-07-20 18:09:33.852032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.102 [2024-07-20 18:09:33.852047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.102 [2024-07-20 18:09:33.852059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.102 [2024-07-20 18:09:33.852089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.102 qpair failed and we were unable to recover it. 
00:33:59.102 [2024-07-20 18:09:33.861766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.102 [2024-07-20 18:09:33.861990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.102 [2024-07-20 18:09:33.862016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.102 [2024-07-20 18:09:33.862031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.102 [2024-07-20 18:09:33.862044] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.102 [2024-07-20 18:09:33.862075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.102 qpair failed and we were unable to recover it. 00:33:59.102 [2024-07-20 18:09:33.871766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.102 [2024-07-20 18:09:33.871983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.102 [2024-07-20 18:09:33.872009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.102 [2024-07-20 18:09:33.872023] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.102 [2024-07-20 18:09:33.872036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.102 [2024-07-20 18:09:33.872066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.102 qpair failed and we were unable to recover it. 00:33:59.102 [2024-07-20 18:09:33.881850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.102 [2024-07-20 18:09:33.882059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.102 [2024-07-20 18:09:33.882085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.102 [2024-07-20 18:09:33.882100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.102 [2024-07-20 18:09:33.882113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.102 [2024-07-20 18:09:33.882143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.102 qpair failed and we were unable to recover it. 
00:33:59.102 [2024-07-20 18:09:33.891851] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.102 [2024-07-20 18:09:33.892157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.102 [2024-07-20 18:09:33.892183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.102 [2024-07-20 18:09:33.892198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.102 [2024-07-20 18:09:33.892211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.102 [2024-07-20 18:09:33.892242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.102 qpair failed and we were unable to recover it. 00:33:59.361 [2024-07-20 18:09:33.901928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.361 [2024-07-20 18:09:33.902182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.361 [2024-07-20 18:09:33.902207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.361 [2024-07-20 18:09:33.902227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.361 [2024-07-20 18:09:33.902242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.361 [2024-07-20 18:09:33.902272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.361 qpair failed and we were unable to recover it. 00:33:59.361 [2024-07-20 18:09:33.911920] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.361 [2024-07-20 18:09:33.912129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.361 [2024-07-20 18:09:33.912155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.361 [2024-07-20 18:09:33.912170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.361 [2024-07-20 18:09:33.912183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.361 [2024-07-20 18:09:33.912212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.361 qpair failed and we were unable to recover it. 
00:33:59.361 [2024-07-20 18:09:33.921937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.361 [2024-07-20 18:09:33.922140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.361 [2024-07-20 18:09:33.922167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.361 [2024-07-20 18:09:33.922181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.361 [2024-07-20 18:09:33.922194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.361 [2024-07-20 18:09:33.922224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.361 qpair failed and we were unable to recover it. 00:33:59.361 [2024-07-20 18:09:33.931966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.361 [2024-07-20 18:09:33.932178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.361 [2024-07-20 18:09:33.932204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.361 [2024-07-20 18:09:33.932218] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.361 [2024-07-20 18:09:33.932230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.361 [2024-07-20 18:09:33.932261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.361 qpair failed and we were unable to recover it. 00:33:59.361 [2024-07-20 18:09:33.942011] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.361 [2024-07-20 18:09:33.942252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.361 [2024-07-20 18:09:33.942279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.361 [2024-07-20 18:09:33.942293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.361 [2024-07-20 18:09:33.942306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.361 [2024-07-20 18:09:33.942336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.361 qpair failed and we were unable to recover it. 
00:33:59.361 [2024-07-20 18:09:33.952043] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.361 [2024-07-20 18:09:33.952257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.361 [2024-07-20 18:09:33.952283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.361 [2024-07-20 18:09:33.952297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.361 [2024-07-20 18:09:33.952310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.361 [2024-07-20 18:09:33.952341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.361 qpair failed and we were unable to recover it. 00:33:59.361 [2024-07-20 18:09:33.962059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.361 [2024-07-20 18:09:33.962261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.361 [2024-07-20 18:09:33.962287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.361 [2024-07-20 18:09:33.962301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.361 [2024-07-20 18:09:33.962314] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.361 [2024-07-20 18:09:33.962357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.361 qpair failed and we were unable to recover it. 00:33:59.361 [2024-07-20 18:09:33.972125] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.361 [2024-07-20 18:09:33.972418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.361 [2024-07-20 18:09:33.972446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.361 [2024-07-20 18:09:33.972465] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.361 [2024-07-20 18:09:33.972478] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.361 [2024-07-20 18:09:33.972509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.361 qpair failed and we were unable to recover it. 
00:33:59.361 [2024-07-20 18:09:33.982125] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.361 [2024-07-20 18:09:33.982333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.361 [2024-07-20 18:09:33.982360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.361 [2024-07-20 18:09:33.982374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.361 [2024-07-20 18:09:33.982387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.361 [2024-07-20 18:09:33.982418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.361 qpair failed and we were unable to recover it. 00:33:59.361 [2024-07-20 18:09:33.992148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.361 [2024-07-20 18:09:33.992378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.361 [2024-07-20 18:09:33.992410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.361 [2024-07-20 18:09:33.992425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.361 [2024-07-20 18:09:33.992438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.361 [2024-07-20 18:09:33.992468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.361 qpair failed and we were unable to recover it. 00:33:59.361 [2024-07-20 18:09:34.002197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.361 [2024-07-20 18:09:34.002412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.361 [2024-07-20 18:09:34.002438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.361 [2024-07-20 18:09:34.002453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.361 [2024-07-20 18:09:34.002466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.361 [2024-07-20 18:09:34.002496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.361 qpair failed and we were unable to recover it. 
00:33:59.361 [2024-07-20 18:09:34.012184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.361 [2024-07-20 18:09:34.012404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.361 [2024-07-20 18:09:34.012430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.361 [2024-07-20 18:09:34.012444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.361 [2024-07-20 18:09:34.012457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.361 [2024-07-20 18:09:34.012488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.361 qpair failed and we were unable to recover it. 00:33:59.361 [2024-07-20 18:09:34.022225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.361 [2024-07-20 18:09:34.022445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.361 [2024-07-20 18:09:34.022471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.361 [2024-07-20 18:09:34.022486] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.361 [2024-07-20 18:09:34.022499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.361 [2024-07-20 18:09:34.022530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.361 qpair failed and we were unable to recover it. 00:33:59.361 [2024-07-20 18:09:34.032308] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.361 [2024-07-20 18:09:34.032565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.361 [2024-07-20 18:09:34.032590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.361 [2024-07-20 18:09:34.032605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.361 [2024-07-20 18:09:34.032618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.361 [2024-07-20 18:09:34.032655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.361 qpair failed and we were unable to recover it. 
00:33:59.361 [2024-07-20 18:09:34.042285] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.361 [2024-07-20 18:09:34.042498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.361 [2024-07-20 18:09:34.042524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.361 [2024-07-20 18:09:34.042538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.361 [2024-07-20 18:09:34.042551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.361 [2024-07-20 18:09:34.042583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.361 qpair failed and we were unable to recover it. 00:33:59.361 [2024-07-20 18:09:34.052349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.361 [2024-07-20 18:09:34.052559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.361 [2024-07-20 18:09:34.052586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.361 [2024-07-20 18:09:34.052600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.361 [2024-07-20 18:09:34.052613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.361 [2024-07-20 18:09:34.052644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.361 qpair failed and we were unable to recover it. 00:33:59.361 [2024-07-20 18:09:34.062375] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.361 [2024-07-20 18:09:34.062624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.361 [2024-07-20 18:09:34.062650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.361 [2024-07-20 18:09:34.062665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.361 [2024-07-20 18:09:34.062678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.362 [2024-07-20 18:09:34.062708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.362 qpair failed and we were unable to recover it. 
00:33:59.362 [2024-07-20 18:09:34.072398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.362 [2024-07-20 18:09:34.072651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.362 [2024-07-20 18:09:34.072676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.362 [2024-07-20 18:09:34.072690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.362 [2024-07-20 18:09:34.072704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.362 [2024-07-20 18:09:34.072734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.362 qpair failed and we were unable to recover it. 00:33:59.362 [2024-07-20 18:09:34.082435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.362 [2024-07-20 18:09:34.082644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.362 [2024-07-20 18:09:34.082675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.362 [2024-07-20 18:09:34.082690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.362 [2024-07-20 18:09:34.082704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.362 [2024-07-20 18:09:34.082734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.362 qpair failed and we were unable to recover it. 00:33:59.362 [2024-07-20 18:09:34.092524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.362 [2024-07-20 18:09:34.092749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.362 [2024-07-20 18:09:34.092778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.362 [2024-07-20 18:09:34.092802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.362 [2024-07-20 18:09:34.092817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.362 [2024-07-20 18:09:34.092849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.362 qpair failed and we were unable to recover it. 
00:33:59.362 [2024-07-20 18:09:34.102527] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.362 [2024-07-20 18:09:34.102739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.362 [2024-07-20 18:09:34.102765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.362 [2024-07-20 18:09:34.102779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.362 [2024-07-20 18:09:34.102799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.362 [2024-07-20 18:09:34.102832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.362 qpair failed and we were unable to recover it. 00:33:59.362 [2024-07-20 18:09:34.112565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.362 [2024-07-20 18:09:34.112821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.362 [2024-07-20 18:09:34.112847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.362 [2024-07-20 18:09:34.112861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.362 [2024-07-20 18:09:34.112874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.362 [2024-07-20 18:09:34.112904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.362 qpair failed and we were unable to recover it. 00:33:59.362 [2024-07-20 18:09:34.122582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.362 [2024-07-20 18:09:34.122801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.362 [2024-07-20 18:09:34.122827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.362 [2024-07-20 18:09:34.122841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.362 [2024-07-20 18:09:34.122860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.362 [2024-07-20 18:09:34.122892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.362 qpair failed and we were unable to recover it. 
00:33:59.362 [2024-07-20 18:09:34.132580] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.362 [2024-07-20 18:09:34.132849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.362 [2024-07-20 18:09:34.132875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.362 [2024-07-20 18:09:34.132889] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.362 [2024-07-20 18:09:34.132903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.362 [2024-07-20 18:09:34.132933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.362 qpair failed and we were unable to recover it. 00:33:59.362 [2024-07-20 18:09:34.142648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.362 [2024-07-20 18:09:34.142873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.362 [2024-07-20 18:09:34.142899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.362 [2024-07-20 18:09:34.142913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.362 [2024-07-20 18:09:34.142927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.362 [2024-07-20 18:09:34.142957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.362 qpair failed and we were unable to recover it. 00:33:59.362 [2024-07-20 18:09:34.152672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.362 [2024-07-20 18:09:34.152891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.362 [2024-07-20 18:09:34.152917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.362 [2024-07-20 18:09:34.152932] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.362 [2024-07-20 18:09:34.152945] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.362 [2024-07-20 18:09:34.152975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.362 qpair failed and we were unable to recover it. 
00:33:59.620 [2024-07-20 18:09:34.162632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.620 [2024-07-20 18:09:34.162846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.620 [2024-07-20 18:09:34.162873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.620 [2024-07-20 18:09:34.162888] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.620 [2024-07-20 18:09:34.162901] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.620 [2024-07-20 18:09:34.162932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.620 qpair failed and we were unable to recover it. 00:33:59.620 [2024-07-20 18:09:34.172746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.620 [2024-07-20 18:09:34.173025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.620 [2024-07-20 18:09:34.173053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.620 [2024-07-20 18:09:34.173068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.620 [2024-07-20 18:09:34.173085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.620 [2024-07-20 18:09:34.173119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.620 qpair failed and we were unable to recover it. 00:33:59.620 [2024-07-20 18:09:34.182703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.620 [2024-07-20 18:09:34.182937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.620 [2024-07-20 18:09:34.182964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.620 [2024-07-20 18:09:34.182978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.620 [2024-07-20 18:09:34.182992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.620 [2024-07-20 18:09:34.183022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.620 qpair failed and we were unable to recover it. 
00:33:59.620 [2024-07-20 18:09:34.192743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.620 [2024-07-20 18:09:34.192959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.620 [2024-07-20 18:09:34.192985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.620 [2024-07-20 18:09:34.193000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.620 [2024-07-20 18:09:34.193013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.620 [2024-07-20 18:09:34.193044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.620 qpair failed and we were unable to recover it. 00:33:59.620 [2024-07-20 18:09:34.202779] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.620 [2024-07-20 18:09:34.202990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.620 [2024-07-20 18:09:34.203017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.620 [2024-07-20 18:09:34.203032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.620 [2024-07-20 18:09:34.203045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.620 [2024-07-20 18:09:34.203075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.620 qpair failed and we were unable to recover it. 00:33:59.620 [2024-07-20 18:09:34.212845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.620 [2024-07-20 18:09:34.213052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.620 [2024-07-20 18:09:34.213078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.620 [2024-07-20 18:09:34.213098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.620 [2024-07-20 18:09:34.213112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.620 [2024-07-20 18:09:34.213143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.620 qpair failed and we were unable to recover it. 
00:33:59.620 [2024-07-20 18:09:34.222845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.620 [2024-07-20 18:09:34.223054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.620 [2024-07-20 18:09:34.223080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.620 [2024-07-20 18:09:34.223094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.620 [2024-07-20 18:09:34.223107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.620 [2024-07-20 18:09:34.223138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.620 qpair failed and we were unable to recover it. 00:33:59.620 [2024-07-20 18:09:34.232845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.620 [2024-07-20 18:09:34.233066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.620 [2024-07-20 18:09:34.233092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.620 [2024-07-20 18:09:34.233107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.620 [2024-07-20 18:09:34.233121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.620 [2024-07-20 18:09:34.233151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.620 qpair failed and we were unable to recover it. 00:33:59.620 [2024-07-20 18:09:34.242883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.620 [2024-07-20 18:09:34.243092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.620 [2024-07-20 18:09:34.243118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.620 [2024-07-20 18:09:34.243133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.620 [2024-07-20 18:09:34.243147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.620 [2024-07-20 18:09:34.243177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.620 qpair failed and we were unable to recover it. 
00:33:59.620 [2024-07-20 18:09:34.252890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.620 [2024-07-20 18:09:34.253101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.620 [2024-07-20 18:09:34.253127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.620 [2024-07-20 18:09:34.253141] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.620 [2024-07-20 18:09:34.253155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.620 [2024-07-20 18:09:34.253185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.620 qpair failed and we were unable to recover it. 00:33:59.620 [2024-07-20 18:09:34.262937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.620 [2024-07-20 18:09:34.263146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.620 [2024-07-20 18:09:34.263173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.620 [2024-07-20 18:09:34.263187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.620 [2024-07-20 18:09:34.263200] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.620 [2024-07-20 18:09:34.263231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.620 qpair failed and we were unable to recover it. 00:33:59.620 [2024-07-20 18:09:34.272944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.621 [2024-07-20 18:09:34.273154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.621 [2024-07-20 18:09:34.273179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.621 [2024-07-20 18:09:34.273194] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.621 [2024-07-20 18:09:34.273207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.621 [2024-07-20 18:09:34.273236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.621 qpair failed and we were unable to recover it. 
00:33:59.621 [2024-07-20 18:09:34.282970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.621 [2024-07-20 18:09:34.283168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.621 [2024-07-20 18:09:34.283194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.621 [2024-07-20 18:09:34.283209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.621 [2024-07-20 18:09:34.283224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.621 [2024-07-20 18:09:34.283255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.621 qpair failed and we were unable to recover it. 00:33:59.621 [2024-07-20 18:09:34.293024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.621 [2024-07-20 18:09:34.293230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.621 [2024-07-20 18:09:34.293256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.621 [2024-07-20 18:09:34.293270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.621 [2024-07-20 18:09:34.293283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.621 [2024-07-20 18:09:34.293314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.621 qpair failed and we were unable to recover it. 00:33:59.621 [2024-07-20 18:09:34.303055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.621 [2024-07-20 18:09:34.303279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.621 [2024-07-20 18:09:34.303304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.621 [2024-07-20 18:09:34.303326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.621 [2024-07-20 18:09:34.303340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.621 [2024-07-20 18:09:34.303371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.621 qpair failed and we were unable to recover it. 
00:33:59.621 [2024-07-20 18:09:34.313091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.621 [2024-07-20 18:09:34.313348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.621 [2024-07-20 18:09:34.313374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.621 [2024-07-20 18:09:34.313389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.621 [2024-07-20 18:09:34.313402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.621 [2024-07-20 18:09:34.313432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.621 qpair failed and we were unable to recover it. 00:33:59.621 [2024-07-20 18:09:34.323098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.621 [2024-07-20 18:09:34.323308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.621 [2024-07-20 18:09:34.323334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.621 [2024-07-20 18:09:34.323353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.621 [2024-07-20 18:09:34.323366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.621 [2024-07-20 18:09:34.323397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.621 qpair failed and we were unable to recover it. 00:33:59.621 [2024-07-20 18:09:34.333124] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.621 [2024-07-20 18:09:34.333332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.621 [2024-07-20 18:09:34.333358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.621 [2024-07-20 18:09:34.333373] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.621 [2024-07-20 18:09:34.333386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.621 [2024-07-20 18:09:34.333416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.621 qpair failed and we were unable to recover it. 
00:33:59.621 [2024-07-20 18:09:34.343182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.621 [2024-07-20 18:09:34.343391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.621 [2024-07-20 18:09:34.343416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.621 [2024-07-20 18:09:34.343431] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.621 [2024-07-20 18:09:34.343444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.621 [2024-07-20 18:09:34.343474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.621 qpair failed and we were unable to recover it. 00:33:59.621 [2024-07-20 18:09:34.353191] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.621 [2024-07-20 18:09:34.353395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.621 [2024-07-20 18:09:34.353421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.621 [2024-07-20 18:09:34.353435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.621 [2024-07-20 18:09:34.353449] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.621 [2024-07-20 18:09:34.353479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.621 qpair failed and we were unable to recover it. 00:33:59.621 [2024-07-20 18:09:34.363203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.621 [2024-07-20 18:09:34.363427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.621 [2024-07-20 18:09:34.363452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.621 [2024-07-20 18:09:34.363467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.621 [2024-07-20 18:09:34.363480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.621 [2024-07-20 18:09:34.363510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.621 qpair failed and we were unable to recover it. 
00:33:59.621 [2024-07-20 18:09:34.373245] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.621 [2024-07-20 18:09:34.373452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.621 [2024-07-20 18:09:34.373478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.621 [2024-07-20 18:09:34.373492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.621 [2024-07-20 18:09:34.373504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.621 [2024-07-20 18:09:34.373534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.621 qpair failed and we were unable to recover it. 00:33:59.621 [2024-07-20 18:09:34.383273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.621 [2024-07-20 18:09:34.383491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.621 [2024-07-20 18:09:34.383516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.621 [2024-07-20 18:09:34.383530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.621 [2024-07-20 18:09:34.383543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.621 [2024-07-20 18:09:34.383575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.621 qpair failed and we were unable to recover it. 00:33:59.621 [2024-07-20 18:09:34.393314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.621 [2024-07-20 18:09:34.393523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.621 [2024-07-20 18:09:34.393557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.621 [2024-07-20 18:09:34.393573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.621 [2024-07-20 18:09:34.393586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.621 [2024-07-20 18:09:34.393617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.621 qpair failed and we were unable to recover it. 
00:33:59.621 [2024-07-20 18:09:34.403317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.621 [2024-07-20 18:09:34.403521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.621 [2024-07-20 18:09:34.403547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.621 [2024-07-20 18:09:34.403561] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.621 [2024-07-20 18:09:34.403575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.621 [2024-07-20 18:09:34.403604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.621 qpair failed and we were unable to recover it. 00:33:59.621 [2024-07-20 18:09:34.413405] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.621 [2024-07-20 18:09:34.413616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.621 [2024-07-20 18:09:34.413642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.621 [2024-07-20 18:09:34.413720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.621 [2024-07-20 18:09:34.413735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.621 [2024-07-20 18:09:34.413766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.621 qpair failed and we were unable to recover it. 00:33:59.880 [2024-07-20 18:09:34.423384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.880 [2024-07-20 18:09:34.423598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.880 [2024-07-20 18:09:34.423625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.880 [2024-07-20 18:09:34.423639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.880 [2024-07-20 18:09:34.423652] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.880 [2024-07-20 18:09:34.423683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.880 qpair failed and we were unable to recover it. 
00:33:59.880 [2024-07-20 18:09:34.433450] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.880 [2024-07-20 18:09:34.433666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.880 [2024-07-20 18:09:34.433692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.880 [2024-07-20 18:09:34.433707] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.880 [2024-07-20 18:09:34.433723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.880 [2024-07-20 18:09:34.433760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.880 qpair failed and we were unable to recover it. 00:33:59.880 [2024-07-20 18:09:34.443463] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.880 [2024-07-20 18:09:34.443667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.880 [2024-07-20 18:09:34.443694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.880 [2024-07-20 18:09:34.443708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.880 [2024-07-20 18:09:34.443721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.880 [2024-07-20 18:09:34.443752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.880 qpair failed and we were unable to recover it. 00:33:59.880 [2024-07-20 18:09:34.453498] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.880 [2024-07-20 18:09:34.453708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.880 [2024-07-20 18:09:34.453736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.880 [2024-07-20 18:09:34.453752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.880 [2024-07-20 18:09:34.453765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.880 [2024-07-20 18:09:34.453804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.880 qpair failed and we were unable to recover it. 
00:33:59.880 [2024-07-20 18:09:34.463495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.880 [2024-07-20 18:09:34.463726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.880 [2024-07-20 18:09:34.463752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.880 [2024-07-20 18:09:34.463766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.880 [2024-07-20 18:09:34.463780] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.880 [2024-07-20 18:09:34.463817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.880 qpair failed and we were unable to recover it. 00:33:59.880 [2024-07-20 18:09:34.473516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.880 [2024-07-20 18:09:34.473724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.880 [2024-07-20 18:09:34.473750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.880 [2024-07-20 18:09:34.473764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.880 [2024-07-20 18:09:34.473777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.880 [2024-07-20 18:09:34.473815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.880 qpair failed and we were unable to recover it. 00:33:59.880 [2024-07-20 18:09:34.483557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.880 [2024-07-20 18:09:34.483774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.880 [2024-07-20 18:09:34.483813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.880 [2024-07-20 18:09:34.483828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.880 [2024-07-20 18:09:34.483842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.880 [2024-07-20 18:09:34.483872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.880 qpair failed and we were unable to recover it. 
00:33:59.880 [2024-07-20 18:09:34.493572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.880 [2024-07-20 18:09:34.493807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.880 [2024-07-20 18:09:34.493832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.880 [2024-07-20 18:09:34.493846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.880 [2024-07-20 18:09:34.493859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.880 [2024-07-20 18:09:34.493889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.880 qpair failed and we were unable to recover it. 00:33:59.880 [2024-07-20 18:09:34.503617] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.880 [2024-07-20 18:09:34.503831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.880 [2024-07-20 18:09:34.503857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.881 [2024-07-20 18:09:34.503871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.881 [2024-07-20 18:09:34.503885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.881 [2024-07-20 18:09:34.503915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.881 qpair failed and we were unable to recover it. 00:33:59.881 [2024-07-20 18:09:34.513628] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.881 [2024-07-20 18:09:34.513843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.881 [2024-07-20 18:09:34.513869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.881 [2024-07-20 18:09:34.513883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.881 [2024-07-20 18:09:34.513896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.881 [2024-07-20 18:09:34.513926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.881 qpair failed and we were unable to recover it. 
00:33:59.881 [2024-07-20 18:09:34.523685] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.881 [2024-07-20 18:09:34.523905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.881 [2024-07-20 18:09:34.523932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.881 [2024-07-20 18:09:34.523946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.881 [2024-07-20 18:09:34.523964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.881 [2024-07-20 18:09:34.523995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.881 qpair failed and we were unable to recover it. 00:33:59.881 [2024-07-20 18:09:34.533714] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.881 [2024-07-20 18:09:34.533930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.881 [2024-07-20 18:09:34.533957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.881 [2024-07-20 18:09:34.533971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.881 [2024-07-20 18:09:34.533984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.881 [2024-07-20 18:09:34.534014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.881 qpair failed and we were unable to recover it. 00:33:59.881 [2024-07-20 18:09:34.543718] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.881 [2024-07-20 18:09:34.543934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.881 [2024-07-20 18:09:34.543960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.881 [2024-07-20 18:09:34.543975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.881 [2024-07-20 18:09:34.543988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.881 [2024-07-20 18:09:34.544019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.881 qpair failed and we were unable to recover it. 
00:33:59.881 [2024-07-20 18:09:34.553813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.881 [2024-07-20 18:09:34.554076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.881 [2024-07-20 18:09:34.554103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.881 [2024-07-20 18:09:34.554117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.881 [2024-07-20 18:09:34.554130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.881 [2024-07-20 18:09:34.554160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.881 qpair failed and we were unable to recover it. 00:33:59.881 [2024-07-20 18:09:34.563807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.881 [2024-07-20 18:09:34.564014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.881 [2024-07-20 18:09:34.564040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.881 [2024-07-20 18:09:34.564055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.881 [2024-07-20 18:09:34.564068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.881 [2024-07-20 18:09:34.564099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.881 qpair failed and we were unable to recover it. 00:33:59.881 [2024-07-20 18:09:34.573807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.881 [2024-07-20 18:09:34.574057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.881 [2024-07-20 18:09:34.574084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.881 [2024-07-20 18:09:34.574099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.881 [2024-07-20 18:09:34.574113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.881 [2024-07-20 18:09:34.574143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.881 qpair failed and we were unable to recover it. 
00:33:59.881 [2024-07-20 18:09:34.583862] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.881 [2024-07-20 18:09:34.584076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.881 [2024-07-20 18:09:34.584102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.881 [2024-07-20 18:09:34.584116] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.881 [2024-07-20 18:09:34.584129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.881 [2024-07-20 18:09:34.584161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.881 qpair failed and we were unable to recover it. 00:33:59.881 [2024-07-20 18:09:34.593870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.881 [2024-07-20 18:09:34.594092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.881 [2024-07-20 18:09:34.594116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.881 [2024-07-20 18:09:34.594131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.881 [2024-07-20 18:09:34.594144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.881 [2024-07-20 18:09:34.594174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.881 qpair failed and we were unable to recover it. 00:33:59.881 [2024-07-20 18:09:34.603915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.881 [2024-07-20 18:09:34.604124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.881 [2024-07-20 18:09:34.604150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.881 [2024-07-20 18:09:34.604165] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.881 [2024-07-20 18:09:34.604178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.881 [2024-07-20 18:09:34.604222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.881 qpair failed and we were unable to recover it. 
00:33:59.881 [2024-07-20 18:09:34.613904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.881 [2024-07-20 18:09:34.614131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.881 [2024-07-20 18:09:34.614157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.881 [2024-07-20 18:09:34.614171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.881 [2024-07-20 18:09:34.614190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.881 [2024-07-20 18:09:34.614222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.881 qpair failed and we were unable to recover it. 00:33:59.881 [2024-07-20 18:09:34.623952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.881 [2024-07-20 18:09:34.624174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.881 [2024-07-20 18:09:34.624200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.881 [2024-07-20 18:09:34.624214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.881 [2024-07-20 18:09:34.624227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.881 [2024-07-20 18:09:34.624258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.881 qpair failed and we were unable to recover it. 00:33:59.881 [2024-07-20 18:09:34.633958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.881 [2024-07-20 18:09:34.634166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.881 [2024-07-20 18:09:34.634192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.881 [2024-07-20 18:09:34.634206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.881 [2024-07-20 18:09:34.634219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.881 [2024-07-20 18:09:34.634249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.881 qpair failed and we were unable to recover it. 
00:33:59.881 [2024-07-20 18:09:34.644031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.881 [2024-07-20 18:09:34.644236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.881 [2024-07-20 18:09:34.644262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.881 [2024-07-20 18:09:34.644276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.881 [2024-07-20 18:09:34.644289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.882 [2024-07-20 18:09:34.644321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.882 qpair failed and we were unable to recover it. 00:33:59.882 [2024-07-20 18:09:34.654031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.882 [2024-07-20 18:09:34.654242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.882 [2024-07-20 18:09:34.654268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.882 [2024-07-20 18:09:34.654283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.882 [2024-07-20 18:09:34.654296] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.882 [2024-07-20 18:09:34.654326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.882 qpair failed and we were unable to recover it. 00:33:59.882 [2024-07-20 18:09:34.664065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.882 [2024-07-20 18:09:34.664286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.882 [2024-07-20 18:09:34.664312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.882 [2024-07-20 18:09:34.664327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.882 [2024-07-20 18:09:34.664340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.882 [2024-07-20 18:09:34.664370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.882 qpair failed and we were unable to recover it. 
00:33:59.882 [2024-07-20 18:09:34.674117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.882 [2024-07-20 18:09:34.674434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.882 [2024-07-20 18:09:34.674460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.882 [2024-07-20 18:09:34.674475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.882 [2024-07-20 18:09:34.674487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:33:59.882 [2024-07-20 18:09:34.674518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.882 qpair failed and we were unable to recover it. 00:34:00.141 [2024-07-20 18:09:34.684099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.141 [2024-07-20 18:09:34.684309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.141 [2024-07-20 18:09:34.684335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.141 [2024-07-20 18:09:34.684350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.141 [2024-07-20 18:09:34.684363] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.141 [2024-07-20 18:09:34.684394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.141 qpair failed and we were unable to recover it. 00:34:00.141 [2024-07-20 18:09:34.694119] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.141 [2024-07-20 18:09:34.694324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.141 [2024-07-20 18:09:34.694350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.141 [2024-07-20 18:09:34.694364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.141 [2024-07-20 18:09:34.694377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.141 [2024-07-20 18:09:34.694408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.141 qpair failed and we were unable to recover it. 
00:34:00.141 [2024-07-20 18:09:34.704181] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.141 [2024-07-20 18:09:34.704432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.141 [2024-07-20 18:09:34.704457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.141 [2024-07-20 18:09:34.704477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.141 [2024-07-20 18:09:34.704492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.141 [2024-07-20 18:09:34.704522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.141 qpair failed and we were unable to recover it. 00:34:00.141 [2024-07-20 18:09:34.714219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.141 [2024-07-20 18:09:34.714471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.141 [2024-07-20 18:09:34.714496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.141 [2024-07-20 18:09:34.714510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.141 [2024-07-20 18:09:34.714524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.141 [2024-07-20 18:09:34.714554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.141 qpair failed and we were unable to recover it. 00:34:00.141 [2024-07-20 18:09:34.724256] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.141 [2024-07-20 18:09:34.724511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.141 [2024-07-20 18:09:34.724537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.141 [2024-07-20 18:09:34.724551] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.141 [2024-07-20 18:09:34.724564] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.141 [2024-07-20 18:09:34.724595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.141 qpair failed and we were unable to recover it. 
00:34:00.141 [2024-07-20 18:09:34.734262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.141 [2024-07-20 18:09:34.734517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.141 [2024-07-20 18:09:34.734544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.141 [2024-07-20 18:09:34.734558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.141 [2024-07-20 18:09:34.734571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.141 [2024-07-20 18:09:34.734601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.141 qpair failed and we were unable to recover it. 00:34:00.141 [2024-07-20 18:09:34.744301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.141 [2024-07-20 18:09:34.744507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.141 [2024-07-20 18:09:34.744532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.141 [2024-07-20 18:09:34.744547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.141 [2024-07-20 18:09:34.744560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.141 [2024-07-20 18:09:34.744589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.141 qpair failed and we were unable to recover it. 00:34:00.141 [2024-07-20 18:09:34.754350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.141 [2024-07-20 18:09:34.754564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.141 [2024-07-20 18:09:34.754593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.141 [2024-07-20 18:09:34.754609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.141 [2024-07-20 18:09:34.754622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.141 [2024-07-20 18:09:34.754653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.141 qpair failed and we were unable to recover it. 
00:34:00.141 [2024-07-20 18:09:34.764353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.141 [2024-07-20 18:09:34.764572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.141 [2024-07-20 18:09:34.764598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.141 [2024-07-20 18:09:34.764612] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.141 [2024-07-20 18:09:34.764625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.141 [2024-07-20 18:09:34.764655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.141 qpair failed and we were unable to recover it. 00:34:00.141 [2024-07-20 18:09:34.774369] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.141 [2024-07-20 18:09:34.774582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.141 [2024-07-20 18:09:34.774607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.141 [2024-07-20 18:09:34.774621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.141 [2024-07-20 18:09:34.774634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.141 [2024-07-20 18:09:34.774665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.141 qpair failed and we were unable to recover it. 00:34:00.141 [2024-07-20 18:09:34.784439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.141 [2024-07-20 18:09:34.784680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.141 [2024-07-20 18:09:34.784705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.141 [2024-07-20 18:09:34.784719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.141 [2024-07-20 18:09:34.784732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.141 [2024-07-20 18:09:34.784763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.141 qpair failed and we were unable to recover it. 
00:34:00.141 [2024-07-20 18:09:34.794458] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.141 [2024-07-20 18:09:34.794664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.141 [2024-07-20 18:09:34.794695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.141 [2024-07-20 18:09:34.794710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.141 [2024-07-20 18:09:34.794725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.141 [2024-07-20 18:09:34.794755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.141 qpair failed and we were unable to recover it. 00:34:00.141 [2024-07-20 18:09:34.804510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.141 [2024-07-20 18:09:34.804776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.141 [2024-07-20 18:09:34.804809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.141 [2024-07-20 18:09:34.804825] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.141 [2024-07-20 18:09:34.804838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.141 [2024-07-20 18:09:34.804870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.141 qpair failed and we were unable to recover it. 00:34:00.141 [2024-07-20 18:09:34.814516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.141 [2024-07-20 18:09:34.814722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.142 [2024-07-20 18:09:34.814748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.142 [2024-07-20 18:09:34.814762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.142 [2024-07-20 18:09:34.814776] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.142 [2024-07-20 18:09:34.814814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.142 qpair failed and we were unable to recover it. 
00:34:00.142 [2024-07-20 18:09:34.824536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.142 [2024-07-20 18:09:34.824750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.142 [2024-07-20 18:09:34.824776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.142 [2024-07-20 18:09:34.824791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.142 [2024-07-20 18:09:34.824815] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.142 [2024-07-20 18:09:34.824846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.142 qpair failed and we were unable to recover it. 00:34:00.142 [2024-07-20 18:09:34.834543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.142 [2024-07-20 18:09:34.834748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.142 [2024-07-20 18:09:34.834774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.142 [2024-07-20 18:09:34.834788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.142 [2024-07-20 18:09:34.834809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.142 [2024-07-20 18:09:34.834846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.142 qpair failed and we were unable to recover it. 00:34:00.142 [2024-07-20 18:09:34.844579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.142 [2024-07-20 18:09:34.844790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.142 [2024-07-20 18:09:34.844823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.142 [2024-07-20 18:09:34.844838] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.142 [2024-07-20 18:09:34.844851] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.142 [2024-07-20 18:09:34.844881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.142 qpair failed and we were unable to recover it. 
00:34:00.142 [2024-07-20 18:09:34.854590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.142 [2024-07-20 18:09:34.854806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.142 [2024-07-20 18:09:34.854832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.142 [2024-07-20 18:09:34.854847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.142 [2024-07-20 18:09:34.854859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.142 [2024-07-20 18:09:34.854888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.142 qpair failed and we were unable to recover it. 00:34:00.142 [2024-07-20 18:09:34.864674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.142 [2024-07-20 18:09:34.864912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.142 [2024-07-20 18:09:34.864938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.142 [2024-07-20 18:09:34.864952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.142 [2024-07-20 18:09:34.864965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.142 [2024-07-20 18:09:34.864997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.142 qpair failed and we were unable to recover it. 00:34:00.142 [2024-07-20 18:09:34.874698] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.142 [2024-07-20 18:09:34.874937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.142 [2024-07-20 18:09:34.874963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.142 [2024-07-20 18:09:34.874978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.142 [2024-07-20 18:09:34.874993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.142 [2024-07-20 18:09:34.875027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.142 qpair failed and we were unable to recover it. 
00:34:00.142 [2024-07-20 18:09:34.884715] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.142 [2024-07-20 18:09:34.884924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.142 [2024-07-20 18:09:34.884955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.142 [2024-07-20 18:09:34.884970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.142 [2024-07-20 18:09:34.884984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.142 [2024-07-20 18:09:34.885014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.142 qpair failed and we were unable to recover it. 00:34:00.142 [2024-07-20 18:09:34.894703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.142 [2024-07-20 18:09:34.894920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.142 [2024-07-20 18:09:34.894945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.142 [2024-07-20 18:09:34.894959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.142 [2024-07-20 18:09:34.894973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.142 [2024-07-20 18:09:34.895003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.142 qpair failed and we were unable to recover it. 00:34:00.142 [2024-07-20 18:09:34.904800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.142 [2024-07-20 18:09:34.905019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.142 [2024-07-20 18:09:34.905045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.142 [2024-07-20 18:09:34.905060] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.142 [2024-07-20 18:09:34.905073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.142 [2024-07-20 18:09:34.905103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.142 qpair failed and we were unable to recover it. 
00:34:00.142 [2024-07-20 18:09:34.914783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.142 [2024-07-20 18:09:34.915002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.142 [2024-07-20 18:09:34.915028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.142 [2024-07-20 18:09:34.915042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.142 [2024-07-20 18:09:34.915055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.142 [2024-07-20 18:09:34.915085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.142 qpair failed and we were unable to recover it. 00:34:00.142 [2024-07-20 18:09:34.924859] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.142 [2024-07-20 18:09:34.925073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.142 [2024-07-20 18:09:34.925098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.142 [2024-07-20 18:09:34.925113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.142 [2024-07-20 18:09:34.925131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.142 [2024-07-20 18:09:34.925164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.142 qpair failed and we were unable to recover it. 00:34:00.142 [2024-07-20 18:09:34.934844] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.142 [2024-07-20 18:09:34.935191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.142 [2024-07-20 18:09:34.935217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.142 [2024-07-20 18:09:34.935231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.142 [2024-07-20 18:09:34.935244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.142 [2024-07-20 18:09:34.935274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.142 qpair failed and we were unable to recover it. 
00:34:00.401 [2024-07-20 18:09:34.944892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.401 [2024-07-20 18:09:34.945108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.401 [2024-07-20 18:09:34.945134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.401 [2024-07-20 18:09:34.945149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.401 [2024-07-20 18:09:34.945162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.401 [2024-07-20 18:09:34.945193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.401 qpair failed and we were unable to recover it. 00:34:00.401 [2024-07-20 18:09:34.954964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.401 [2024-07-20 18:09:34.955173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.401 [2024-07-20 18:09:34.955199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.401 [2024-07-20 18:09:34.955214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.401 [2024-07-20 18:09:34.955227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.401 [2024-07-20 18:09:34.955259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.401 qpair failed and we were unable to recover it. 00:34:00.401 [2024-07-20 18:09:34.964928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.401 [2024-07-20 18:09:34.965134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.401 [2024-07-20 18:09:34.965160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.402 [2024-07-20 18:09:34.965175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.402 [2024-07-20 18:09:34.965188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.402 [2024-07-20 18:09:34.965219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.402 qpair failed and we were unable to recover it. 
00:34:00.402 [2024-07-20 18:09:34.974959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.402 [2024-07-20 18:09:34.975169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.402 [2024-07-20 18:09:34.975195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.402 [2024-07-20 18:09:34.975209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.402 [2024-07-20 18:09:34.975222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.402 [2024-07-20 18:09:34.975253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.402 qpair failed and we were unable to recover it. 00:34:00.402 [2024-07-20 18:09:34.984988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.402 [2024-07-20 18:09:34.985204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.402 [2024-07-20 18:09:34.985229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.402 [2024-07-20 18:09:34.985243] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.402 [2024-07-20 18:09:34.985257] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.402 [2024-07-20 18:09:34.985287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.402 qpair failed and we were unable to recover it. 00:34:00.402 [2024-07-20 18:09:34.995052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.402 [2024-07-20 18:09:34.995299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.402 [2024-07-20 18:09:34.995324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.402 [2024-07-20 18:09:34.995339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.402 [2024-07-20 18:09:34.995352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.402 [2024-07-20 18:09:34.995382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.402 qpair failed and we were unable to recover it. 
00:34:00.402 [2024-07-20 18:09:35.005084] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.402 [2024-07-20 18:09:35.005296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.402 [2024-07-20 18:09:35.005321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.402 [2024-07-20 18:09:35.005336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.402 [2024-07-20 18:09:35.005349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.402 [2024-07-20 18:09:35.005380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.402 qpair failed and we were unable to recover it. 00:34:00.402 [2024-07-20 18:09:35.015065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.402 [2024-07-20 18:09:35.015273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.402 [2024-07-20 18:09:35.015299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.402 [2024-07-20 18:09:35.015314] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.402 [2024-07-20 18:09:35.015333] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.402 [2024-07-20 18:09:35.015363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.402 qpair failed and we were unable to recover it. 00:34:00.402 [2024-07-20 18:09:35.025109] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.402 [2024-07-20 18:09:35.025322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.402 [2024-07-20 18:09:35.025347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.402 [2024-07-20 18:09:35.025362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.402 [2024-07-20 18:09:35.025376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.402 [2024-07-20 18:09:35.025405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.402 qpair failed and we were unable to recover it. 
00:34:00.402 [2024-07-20 18:09:35.035148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.402 [2024-07-20 18:09:35.035444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.402 [2024-07-20 18:09:35.035470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.402 [2024-07-20 18:09:35.035485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.402 [2024-07-20 18:09:35.035498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.402 [2024-07-20 18:09:35.035527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.402 qpair failed and we were unable to recover it. 00:34:00.402 [2024-07-20 18:09:35.045258] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.402 [2024-07-20 18:09:35.045465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.402 [2024-07-20 18:09:35.045490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.402 [2024-07-20 18:09:35.045505] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.402 [2024-07-20 18:09:35.045518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.402 [2024-07-20 18:09:35.045549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.402 qpair failed and we were unable to recover it. 00:34:00.402 [2024-07-20 18:09:35.055225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.402 [2024-07-20 18:09:35.055471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.402 [2024-07-20 18:09:35.055497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.402 [2024-07-20 18:09:35.055511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.402 [2024-07-20 18:09:35.055524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.402 [2024-07-20 18:09:35.055555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.402 qpair failed and we were unable to recover it. 
00:34:00.402 [2024-07-20 18:09:35.065230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.402 [2024-07-20 18:09:35.065440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.402 [2024-07-20 18:09:35.065466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.402 [2024-07-20 18:09:35.065480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.402 [2024-07-20 18:09:35.065493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.402 [2024-07-20 18:09:35.065524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.402 qpair failed and we were unable to recover it. 00:34:00.402 [2024-07-20 18:09:35.075268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.402 [2024-07-20 18:09:35.075476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.402 [2024-07-20 18:09:35.075502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.402 [2024-07-20 18:09:35.075516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.402 [2024-07-20 18:09:35.075529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.402 [2024-07-20 18:09:35.075560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.402 qpair failed and we were unable to recover it. 00:34:00.402 [2024-07-20 18:09:35.085324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.402 [2024-07-20 18:09:35.085532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.402 [2024-07-20 18:09:35.085558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.402 [2024-07-20 18:09:35.085573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.402 [2024-07-20 18:09:35.085586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.402 [2024-07-20 18:09:35.085619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.402 qpair failed and we were unable to recover it. 
00:34:00.402 [2024-07-20 18:09:35.095311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.402 [2024-07-20 18:09:35.095512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.402 [2024-07-20 18:09:35.095538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.402 [2024-07-20 18:09:35.095552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.402 [2024-07-20 18:09:35.095565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.402 [2024-07-20 18:09:35.095596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.402 qpair failed and we were unable to recover it. 00:34:00.402 [2024-07-20 18:09:35.105430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.402 [2024-07-20 18:09:35.105653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.402 [2024-07-20 18:09:35.105679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.402 [2024-07-20 18:09:35.105702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.403 [2024-07-20 18:09:35.105716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.403 [2024-07-20 18:09:35.105746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.403 qpair failed and we were unable to recover it. 00:34:00.403 [2024-07-20 18:09:35.115375] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.403 [2024-07-20 18:09:35.115579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.403 [2024-07-20 18:09:35.115605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.403 [2024-07-20 18:09:35.115619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.403 [2024-07-20 18:09:35.115632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.403 [2024-07-20 18:09:35.115665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.403 qpair failed and we were unable to recover it. 
00:34:00.403 [2024-07-20 18:09:35.125399] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.403 [2024-07-20 18:09:35.125620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.403 [2024-07-20 18:09:35.125645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.403 [2024-07-20 18:09:35.125660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.403 [2024-07-20 18:09:35.125673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.403 [2024-07-20 18:09:35.125704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.403 qpair failed and we were unable to recover it. 00:34:00.403 [2024-07-20 18:09:35.135430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.403 [2024-07-20 18:09:35.135645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.403 [2024-07-20 18:09:35.135671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.403 [2024-07-20 18:09:35.135686] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.403 [2024-07-20 18:09:35.135699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.403 [2024-07-20 18:09:35.135729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.403 qpair failed and we were unable to recover it. 00:34:00.403 [2024-07-20 18:09:35.145522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.403 [2024-07-20 18:09:35.145769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.403 [2024-07-20 18:09:35.145801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.403 [2024-07-20 18:09:35.145817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.403 [2024-07-20 18:09:35.145830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.403 [2024-07-20 18:09:35.145862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.403 qpair failed and we were unable to recover it. 
00:34:00.403 [2024-07-20 18:09:35.155496] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.403 [2024-07-20 18:09:35.155710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.403 [2024-07-20 18:09:35.155735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.403 [2024-07-20 18:09:35.155750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.403 [2024-07-20 18:09:35.155763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5844000b90 00:34:00.403 [2024-07-20 18:09:35.155801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:00.403 qpair failed and we were unable to recover it. 00:34:00.403 [2024-07-20 18:09:35.155904] nvme_ctrlr.c:4353:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:00.403 A controller has encountered a failure and is being reset. 00:34:00.403 [2024-07-20 18:09:35.155974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22050f0 (9): Bad file descriptor 00:34:00.403 Controller properly reset. 00:34:00.662 Initializing NVMe Controllers 00:34:00.662 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:00.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:00.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:00.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:00.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:00.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:00.662 Initialization complete. Launching workers. 
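The block above is the tail of the target-disconnect scenario: the host keeps retrying the fabrics CONNECT for a new I/O qpair, the target rejects it ("Unknown controller ID 0x1", sct 1 / sc 130), the keep-alive eventually fails too, and the host resets the controller and re-attaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 across lcores 0-3. As a hedged sketch only (not lifted from host/target_disconnect.sh), one way to provoke a similar failure/reset cycle by hand is to drop and re-add the target's TCP listener while host I/O is in flight; the rpc.py path, sleep duration, and overall mechanism below are assumptions, not the test's actual procedure:

    # Hedged sketch, not taken from target_disconnect.sh: remove and re-add the
    # target's TCP listener so the host sees CONNECT failures and then recovers.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    sleep 5   # during this window the host logs CONNECT failures / "CQ transport error"
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    # once the listener is back, the host resets the controller and re-attaches,
    # matching "Controller properly reset" / "Attached to NVMe over Fabrics controller"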
00:34:00.662 Starting thread on core 1 00:34:00.662 Starting thread on core 2 00:34:00.662 Starting thread on core 3 00:34:00.662 Starting thread on core 0 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:00.662 00:34:00.662 real 0m10.790s 00:34:00.662 user 0m15.536s 00:34:00.662 sys 0m6.049s 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:00.662 ************************************ 00:34:00.662 END TEST nvmf_target_disconnect_tc2 00:34:00.662 ************************************ 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:00.662 rmmod nvme_tcp 00:34:00.662 rmmod nvme_fabrics 00:34:00.662 rmmod nvme_keyring 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1104075 ']' 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1104075 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 1104075 ']' 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 1104075 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1104075 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1104075' 00:34:00.662 killing process with pid 1104075 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 1104075 00:34:00.662 18:09:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 1104075 00:34:00.921 18:09:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:00.921 
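After the per-test timing summary, the fixture tears itself down: the kernel nvme_tcp/nvme_fabrics/nvme_keyring modules are unloaded via modprobe -v -r, and the nvmf target process (pid 1104075) is killed only after a kill -0 liveness check and a reactor process-name check. The following is a minimal, illustrative sketch of that teardown pattern; the function and variable names are invented here, and the real nvmftestfini/killprocess helpers in common.sh do considerably more bookkeeping:

    # Illustrative teardown sketch (not the real common.sh helpers).
    stop_nvmf_target() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then   # only signal a live process
            kill "$pid"
            # 'wait' only applies if the target was launched from this shell;
            # the real helper re-checks with kill -0 instead.
            wait "$pid" 2>/dev/null || true
        fi
    }

    stop_nvmf_target "$nvmfpid"
    modprobe -v -r nvme-tcp       # the log above shows this also rmmods nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics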
18:09:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:00.921 18:09:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:00.921 18:09:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:00.921 18:09:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:00.921 18:09:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.921 18:09:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:00.921 18:09:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:02.819 18:09:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:02.819 00:34:02.819 real 0m15.324s 00:34:02.819 user 0m41.563s 00:34:02.819 sys 0m7.922s 00:34:02.819 18:09:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:02.819 18:09:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:02.819 ************************************ 00:34:02.819 END TEST nvmf_target_disconnect 00:34:02.819 ************************************ 00:34:02.819 18:09:37 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:02.819 18:09:37 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:02.819 18:09:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:03.078 18:09:37 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:03.078 00:34:03.078 real 26m52.471s 00:34:03.078 user 73m18.743s 00:34:03.078 sys 6m20.160s 00:34:03.078 18:09:37 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:03.078 18:09:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:03.078 ************************************ 00:34:03.078 END TEST nvmf_tcp 00:34:03.078 ************************************ 00:34:03.078 18:09:37 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:03.078 18:09:37 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:03.078 18:09:37 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:03.078 18:09:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:03.078 18:09:37 -- common/autotest_common.sh@10 -- # set +x 00:34:03.078 ************************************ 00:34:03.078 START TEST spdkcli_nvmf_tcp 00:34:03.078 ************************************ 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:03.078 * Looking for test storage... 
00:34:03.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1105268 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1105268 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 1105268 ']' 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:03.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:03.078 18:09:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:03.078 [2024-07-20 18:09:37.767747] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:34:03.078 [2024-07-20 18:09:37.767857] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1105268 ] 00:34:03.078 EAL: No free 2048 kB hugepages reported on node 1 00:34:03.078 [2024-07-20 18:09:37.840969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:03.336 [2024-07-20 18:09:37.941340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:03.336 [2024-07-20 18:09:37.941349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:03.336 18:09:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:03.336 18:09:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:34:03.336 18:09:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:03.336 18:09:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:03.336 18:09:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:03.336 18:09:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:03.336 18:09:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:03.336 18:09:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:03.336 18:09:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:03.336 18:09:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:03.336 18:09:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:03.336 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:03.336 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:03.336 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:03.336 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:03.336 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:03.336 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:03.336 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:03.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:03.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:03.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:03.336 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:03.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:03.336 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:03.336 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:03.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:03.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:03.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:03.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:03.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:03.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:03.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:03.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:03.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:03.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:03.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:03.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:03.336 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:03.336 ' 00:34:05.861 [2024-07-20 18:09:40.615008] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:07.230 [2024-07-20 18:09:41.855333] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:09.754 [2024-07-20 18:09:44.138449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:11.647 [2024-07-20 18:09:46.108696] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:13.018 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:13.018 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:13.018 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:13.018 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:13.018 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:13.018 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:13.018 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:13.018 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:13.018 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:13.018 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:13.018 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:13.018 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:13.018 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:13.018 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:13.018 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:13.018 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:13.018 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:13.018 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:13.018 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:13.018 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:13.018 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:13.018 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:13.018 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:13.018 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:13.018 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:13.018 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:13.018 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:13.018 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:13.018 18:09:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:13.018 18:09:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:13.018 18:09:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:13.019 18:09:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:13.019 18:09:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:13.019 18:09:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:13.019 18:09:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:13.019 18:09:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:13.633 18:09:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:13.633 18:09:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:13.633 18:09:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:13.633 18:09:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:13.633 18:09:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:13.633 18:09:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:13.633 18:09:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:13.633 18:09:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:13.633 18:09:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:13.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:13.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:13.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:13.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:13.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:13.634 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:13.634 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:13.634 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:13.634 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:13.634 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:13.634 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:13.634 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:13.634 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:13.634 ' 00:34:18.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:18.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:18.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:18.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:18.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:18.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:18.896 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:18.896 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:18.896 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:18.896 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:18.896 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:34:18.896 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:18.896 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:18.896 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:18.896 18:09:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:18.896 18:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:18.896 18:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:18.896 18:09:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1105268 00:34:18.896 18:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1105268 ']' 00:34:18.896 18:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1105268 00:34:18.896 18:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:34:18.896 18:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:18.896 18:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1105268 00:34:18.896 18:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:18.896 18:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:18.896 18:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1105268' 00:34:18.896 killing process with pid 1105268 00:34:18.896 18:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 1105268 00:34:18.896 18:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 1105268 00:34:19.154 18:09:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:19.154 18:09:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:19.154 18:09:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1105268 ']' 00:34:19.154 18:09:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1105268 00:34:19.154 18:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1105268 ']' 00:34:19.154 18:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1105268 00:34:19.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1105268) - No such process 00:34:19.154 18:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 1105268 is not found' 00:34:19.154 Process with pid 1105268 is not found 00:34:19.154 18:09:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:19.154 18:09:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:19.154 18:09:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:19.154 00:34:19.154 real 0m16.124s 00:34:19.154 user 0m34.153s 00:34:19.154 sys 0m0.811s 00:34:19.154 18:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:19.154 18:09:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:19.154 ************************************ 00:34:19.154 END TEST spdkcli_nvmf_tcp 00:34:19.154 ************************************ 00:34:19.155 18:09:53 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:19.155 18:09:53 -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:19.155 18:09:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:19.155 18:09:53 -- common/autotest_common.sh@10 -- # set +x 00:34:19.155 ************************************ 00:34:19.155 START TEST nvmf_identify_passthru 00:34:19.155 ************************************ 00:34:19.155 18:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:19.155 * Looking for test storage... 00:34:19.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:19.155 18:09:53 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:19.155 18:09:53 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:19.155 18:09:53 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.155 18:09:53 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.155 18:09:53 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.155 18:09:53 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.155 18:09:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.155 18:09:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:19.155 18:09:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:19.155 18:09:53 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:19.155 18:09:53 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:19.155 18:09:53 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.155 18:09:53 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.155 18:09:53 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.155 18:09:53 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.155 18:09:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.155 18:09:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:19.155 18:09:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.155 18:09:53 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.155 18:09:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:19.155 18:09:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:19.155 18:09:53 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:19.155 18:09:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:21.690 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:21.690 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:21.690 18:09:55 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:21.690 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:21.690 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:21.690 18:09:55 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:21.690 18:09:55 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:21.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:21.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:34:21.691 00:34:21.691 --- 10.0.0.2 ping statistics --- 00:34:21.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.691 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:21.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:21.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:34:21.691 00:34:21.691 --- 10.0.0.1 ping statistics --- 00:34:21.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.691 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:21.691 18:09:56 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:21.691 18:09:56 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:21.691 18:09:56 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:21.691 18:09:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:21.691 18:09:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:21.691 18:09:56 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:34:21.691 18:09:56 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:34:21.691 18:09:56 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:34:21.691 18:09:56 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:34:21.691 18:09:56 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:21.691 18:09:56 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:21.691 18:09:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:21.691 18:09:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:21.691 18:09:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:34:21.691 18:09:56 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:34:21.691 18:09:56 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:34:21.691 18:09:56 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0 00:34:21.691 18:09:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:34:21.691 18:09:56 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:34:21.691 18:09:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:21.691 18:09:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:21.691 18:09:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:21.691 EAL: No free 2048 kB hugepages reported on node 1 00:34:25.878 
18:10:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:34:25.878 18:10:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:25.878 18:10:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:25.878 18:10:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:25.878 EAL: No free 2048 kB hugepages reported on node 1 00:34:30.063 18:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:30.063 18:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:30.063 18:10:04 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:30.063 18:10:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:30.063 18:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:30.063 18:10:04 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:30.063 18:10:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:30.063 18:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1109882 00:34:30.063 18:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:30.063 18:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:30.063 18:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1109882 00:34:30.063 18:10:04 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 1109882 ']' 00:34:30.063 18:10:04 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:30.063 18:10:04 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:30.063 18:10:04 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:30.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:30.063 18:10:04 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:30.063 18:10:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:30.063 [2024-07-20 18:10:04.730130] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:34:30.063 [2024-07-20 18:10:04.730215] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:30.063 EAL: No free 2048 kB hugepages reported on node 1 00:34:30.063 [2024-07-20 18:10:04.798923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:30.322 [2024-07-20 18:10:04.890558] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:30.322 [2024-07-20 18:10:04.890617] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:30.322 [2024-07-20 18:10:04.890634] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:30.322 [2024-07-20 18:10:04.890657] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:30.322 [2024-07-20 18:10:04.890670] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:30.322 [2024-07-20 18:10:04.891021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:30.322 [2024-07-20 18:10:04.891046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:30.322 [2024-07-20 18:10:04.891067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:30.322 [2024-07-20 18:10:04.891069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:30.322 18:10:04 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:30.322 18:10:04 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:34:30.322 18:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:30.322 18:10:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.322 18:10:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:30.322 INFO: Log level set to 20 00:34:30.322 INFO: Requests: 00:34:30.322 { 00:34:30.322 "jsonrpc": "2.0", 00:34:30.322 "method": "nvmf_set_config", 00:34:30.322 "id": 1, 00:34:30.322 "params": { 00:34:30.322 "admin_cmd_passthru": { 00:34:30.322 "identify_ctrlr": true 00:34:30.322 } 00:34:30.322 } 00:34:30.322 } 00:34:30.322 00:34:30.322 INFO: response: 00:34:30.322 { 00:34:30.322 "jsonrpc": "2.0", 00:34:30.322 "id": 1, 00:34:30.322 "result": true 00:34:30.322 } 00:34:30.322 00:34:30.322 18:10:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.322 18:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:30.322 18:10:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.322 18:10:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:30.322 INFO: Setting log level to 20 00:34:30.322 INFO: Setting log level to 20 00:34:30.322 INFO: Log level set to 20 00:34:30.322 INFO: Log level set to 20 00:34:30.322 INFO: Requests: 00:34:30.322 { 00:34:30.322 "jsonrpc": "2.0", 00:34:30.322 "method": "framework_start_init", 00:34:30.322 "id": 1 00:34:30.322 } 00:34:30.322 00:34:30.322 INFO: Requests: 00:34:30.322 { 00:34:30.322 "jsonrpc": "2.0", 00:34:30.322 "method": "framework_start_init", 00:34:30.322 "id": 1 00:34:30.322 } 00:34:30.322 00:34:30.322 [2024-07-20 18:10:05.086146] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:30.322 INFO: response: 00:34:30.322 { 00:34:30.322 "jsonrpc": "2.0", 00:34:30.322 "id": 1, 00:34:30.322 "result": true 00:34:30.322 } 00:34:30.322 00:34:30.322 INFO: response: 00:34:30.322 { 00:34:30.322 "jsonrpc": "2.0", 00:34:30.322 "id": 1, 00:34:30.322 "result": true 00:34:30.322 } 00:34:30.322 00:34:30.322 18:10:05 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.322 18:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:30.322 18:10:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.322 18:10:05 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:30.322 INFO: Setting log level to 40 00:34:30.322 INFO: Setting log level to 40 00:34:30.322 INFO: Setting log level to 40 00:34:30.322 [2024-07-20 18:10:05.096214] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:30.322 18:10:05 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.322 18:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:30.322 18:10:05 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:30.322 18:10:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:30.580 18:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:34:30.580 18:10:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.580 18:10:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:33.856 Nvme0n1 00:34:33.856 18:10:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.856 18:10:07 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:33.856 18:10:07 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.856 18:10:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:33.856 18:10:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.856 18:10:07 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:33.856 18:10:07 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.856 18:10:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:33.856 18:10:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.856 18:10:07 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:33.856 18:10:07 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.856 18:10:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:33.856 [2024-07-20 18:10:07.988121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:33.856 18:10:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.856 18:10:07 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:33.856 18:10:07 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.856 18:10:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:33.856 [ 00:34:33.856 { 00:34:33.856 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:33.856 "subtype": "Discovery", 00:34:33.856 "listen_addresses": [], 00:34:33.857 "allow_any_host": true, 00:34:33.857 "hosts": [] 00:34:33.857 }, 00:34:33.857 { 00:34:33.857 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:33.857 "subtype": "NVMe", 00:34:33.857 "listen_addresses": [ 00:34:33.857 { 00:34:33.857 "trtype": "TCP", 00:34:33.857 "adrfam": "IPv4", 00:34:33.857 "traddr": "10.0.0.2", 00:34:33.857 "trsvcid": "4420" 00:34:33.857 } 00:34:33.857 ], 00:34:33.857 "allow_any_host": true, 00:34:33.857 "hosts": [], 00:34:33.857 "serial_number": 
"SPDK00000000000001", 00:34:33.857 "model_number": "SPDK bdev Controller", 00:34:33.857 "max_namespaces": 1, 00:34:33.857 "min_cntlid": 1, 00:34:33.857 "max_cntlid": 65519, 00:34:33.857 "namespaces": [ 00:34:33.857 { 00:34:33.857 "nsid": 1, 00:34:33.857 "bdev_name": "Nvme0n1", 00:34:33.857 "name": "Nvme0n1", 00:34:33.857 "nguid": "1E8BCB12E7754ACFBB57567728DF3281", 00:34:33.857 "uuid": "1e8bcb12-e775-4acf-bb57-567728df3281" 00:34:33.857 } 00:34:33.857 ] 00:34:33.857 } 00:34:33.857 ] 00:34:33.857 18:10:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.857 18:10:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:33.857 18:10:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:33.857 18:10:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:33.857 EAL: No free 2048 kB hugepages reported on node 1 00:34:33.857 18:10:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:34:33.857 18:10:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:33.857 18:10:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:33.857 18:10:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:33.857 EAL: No free 2048 kB hugepages reported on node 1 00:34:33.857 18:10:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:33.857 18:10:08 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:34:33.857 18:10:08 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:33.857 18:10:08 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:33.857 18:10:08 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.857 18:10:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:33.857 18:10:08 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.857 18:10:08 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:33.857 18:10:08 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:33.857 18:10:08 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:33.857 18:10:08 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:33.857 18:10:08 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:33.857 18:10:08 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:33.857 18:10:08 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:33.857 18:10:08 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:33.857 rmmod nvme_tcp 00:34:33.857 rmmod nvme_fabrics 00:34:33.857 rmmod nvme_keyring 00:34:33.857 18:10:08 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:33.857 18:10:08 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:33.857 18:10:08 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:33.857 18:10:08 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1109882 ']' 00:34:33.857 18:10:08 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1109882 00:34:33.857 18:10:08 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 1109882 ']' 00:34:33.857 18:10:08 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 1109882 00:34:33.857 18:10:08 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:34:33.857 18:10:08 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:33.857 18:10:08 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1109882 00:34:33.857 18:10:08 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:33.857 18:10:08 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:33.857 18:10:08 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1109882' 00:34:33.857 killing process with pid 1109882 00:34:33.857 18:10:08 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 1109882 00:34:33.857 18:10:08 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 1109882 00:34:35.227 18:10:10 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:35.227 18:10:10 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:35.227 18:10:10 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:35.227 18:10:10 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:35.227 18:10:10 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:35.227 18:10:10 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.227 18:10:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:35.227 18:10:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.803 18:10:12 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:37.803 00:34:37.803 real 0m18.228s 00:34:37.803 user 0m26.984s 00:34:37.803 sys 0m2.463s 00:34:37.803 18:10:12 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:37.803 18:10:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:37.803 ************************************ 00:34:37.803 END TEST nvmf_identify_passthru 00:34:37.803 ************************************ 00:34:37.803 18:10:12 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:37.803 18:10:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:37.803 18:10:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:37.803 18:10:12 -- common/autotest_common.sh@10 -- # set +x 00:34:37.803 ************************************ 00:34:37.803 START TEST nvmf_dif 00:34:37.803 ************************************ 00:34:37.803 18:10:12 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:37.803 * Looking for test storage... 
00:34:37.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:37.803 18:10:12 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:37.803 18:10:12 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:37.803 18:10:12 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:37.803 18:10:12 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:37.803 18:10:12 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:37.804 18:10:12 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.804 18:10:12 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.804 18:10:12 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.804 18:10:12 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:34:37.804 18:10:12 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:37.804 18:10:12 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:37.804 18:10:12 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:37.804 18:10:12 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:37.804 18:10:12 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:37.804 18:10:12 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:37.804 18:10:12 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:37.804 18:10:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:37.804 18:10:12 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:37.804 18:10:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:39.178 18:10:13 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:39.178 18:10:13 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:34:39.178 18:10:13 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:39.178 18:10:13 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:39.178 18:10:13 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:39.178 18:10:13 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:39.178 18:10:13 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:39.178 18:10:13 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:34:39.178 18:10:13 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:39.178 18:10:13 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:34:39.178 18:10:13 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:34:39.178 18:10:13 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:34:39.178 18:10:13 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:34:39.178 18:10:13 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:39.179 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:39.179 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:39.179 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:39.179 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:39.179 18:10:13 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:39.436 18:10:13 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:39.436 18:10:14 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:39.436 18:10:14 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:39.436 18:10:14 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:39.436 18:10:14 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:39.436 18:10:14 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:39.436 18:10:14 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:39.436 18:10:14 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:39.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:39.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:34:39.436 00:34:39.436 --- 10.0.0.2 ping statistics --- 00:34:39.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.436 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:34:39.436 18:10:14 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:39.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:39.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:34:39.436 00:34:39.436 --- 10.0.0.1 ping statistics --- 00:34:39.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.436 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:34:39.436 18:10:14 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:39.436 18:10:14 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:34:39.436 18:10:14 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:39.436 18:10:14 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:40.808 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:40.808 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:40.808 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:40.808 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:40.808 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:40.808 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:40.808 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:40.808 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:40.808 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:40.808 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:40.808 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:40.808 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:40.808 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:40.808 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:40.808 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:40.808 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:40.808 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:40.808 18:10:15 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:40.808 18:10:15 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:40.808 18:10:15 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:40.808 18:10:15 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:40.808 18:10:15 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:40.808 18:10:15 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:40.808 18:10:15 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:40.808 18:10:15 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:40.808 18:10:15 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:40.808 18:10:15 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:40.808 18:10:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:40.808 18:10:15 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1113024 00:34:40.808 18:10:15 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:40.808 18:10:15 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1113024 00:34:40.808 18:10:15 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 1113024 ']' 00:34:40.808 18:10:15 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.808 18:10:15 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:40.808 18:10:15 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:40.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:40.808 18:10:15 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:40.808 18:10:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:40.808 [2024-07-20 18:10:15.437774] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:34:40.808 [2024-07-20 18:10:15.437880] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:40.808 EAL: No free 2048 kB hugepages reported on node 1 00:34:40.808 [2024-07-20 18:10:15.503444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:40.808 [2024-07-20 18:10:15.591525] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:40.808 [2024-07-20 18:10:15.591582] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:40.808 [2024-07-20 18:10:15.591595] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:40.808 [2024-07-20 18:10:15.591606] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:40.808 [2024-07-20 18:10:15.591616] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
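The nvmf_tcp_init steps traced above build the test network out of the two cvl ports and then start the NVMe-oF target inside the new namespace. A condensed sketch of that fixture, reusing the interface names, addresses and nvmf_tgt flags seen in this run (they are specific to this rig, not portable defaults):

    # target side lives in its own network namespace, the initiator stays in the root namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-facing port moves into the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # let NVMe/TCP (port 4420) in from the initiator port, then verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # start the NVMe-oF target inside the namespace with the flags used by this job
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &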
00:34:40.808 [2024-07-20 18:10:15.591660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:41.066 18:10:15 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:41.066 18:10:15 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:34:41.066 18:10:15 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:41.066 18:10:15 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:41.066 18:10:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:41.066 18:10:15 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:41.066 18:10:15 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:41.066 18:10:15 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:41.066 18:10:15 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.066 18:10:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:41.066 [2024-07-20 18:10:15.728673] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:41.066 18:10:15 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.066 18:10:15 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:41.066 18:10:15 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:41.066 18:10:15 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:41.066 18:10:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:41.066 ************************************ 00:34:41.066 START TEST fio_dif_1_default 00:34:41.066 ************************************ 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:41.066 bdev_null0 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:41.066 [2024-07-20 18:10:15.785004] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:41.066 { 00:34:41.066 "params": { 00:34:41.066 "name": "Nvme$subsystem", 00:34:41.066 "trtype": "$TEST_TRANSPORT", 00:34:41.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.066 "adrfam": "ipv4", 00:34:41.066 "trsvcid": "$NVMF_PORT", 00:34:41.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.066 "hdgst": ${hdgst:-false}, 00:34:41.066 "ddgst": ${ddgst:-false} 00:34:41.066 }, 00:34:41.066 "method": "bdev_nvme_attach_controller" 00:34:41.066 } 00:34:41.066 EOF 00:34:41.066 )") 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:41.066 "params": { 00:34:41.066 "name": "Nvme0", 00:34:41.066 "trtype": "tcp", 00:34:41.066 "traddr": "10.0.0.2", 00:34:41.066 "adrfam": "ipv4", 00:34:41.066 "trsvcid": "4420", 00:34:41.066 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:41.066 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:41.066 "hdgst": false, 00:34:41.066 "ddgst": false 00:34:41.066 }, 00:34:41.066 "method": "bdev_nvme_attach_controller" 00:34:41.066 }' 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:41.066 18:10:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.325 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:41.325 fio-3.35 00:34:41.325 Starting 1 thread 00:34:41.325 EAL: No free 2048 kB hugepages reported on node 1 00:34:53.539 00:34:53.539 filename0: (groupid=0, jobs=1): err= 0: pid=1113252: Sat Jul 20 18:10:26 2024 00:34:53.539 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10002msec) 00:34:53.539 slat (nsec): min=4857, max=36048, avg=9678.36, stdev=3099.85 00:34:53.539 clat (usec): min=41828, max=45880, avg=41994.59, stdev=269.27 00:34:53.539 lat (usec): min=41837, max=45896, avg=42004.27, stdev=269.21 00:34:53.539 clat percentiles (usec): 00:34:53.539 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:34:53.539 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:53.539 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:53.539 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:34:53.539 | 99.99th=[45876] 00:34:53.539 bw ( KiB/s): min= 352, max= 384, per=99.81%, avg=380.63, stdev=10.09, samples=19 00:34:53.539 iops : min= 88, max= 96, 
avg=95.16, stdev= 2.52, samples=19 00:34:53.539 lat (msec) : 50=100.00% 00:34:53.539 cpu : usr=89.39%, sys=10.31%, ctx=24, majf=0, minf=237 00:34:53.539 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.539 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.539 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:53.539 00:34:53.539 Run status group 0 (all jobs): 00:34:53.539 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3808KiB (3899kB), run=10002-10002msec 00:34:53.539 18:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:53.539 18:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:53.539 18:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:53.539 18:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:53.539 18:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:53.539 18:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.540 00:34:53.540 real 0m10.975s 00:34:53.540 user 0m10.045s 00:34:53.540 sys 0m1.298s 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:53.540 ************************************ 00:34:53.540 END TEST fio_dif_1_default 00:34:53.540 ************************************ 00:34:53.540 18:10:26 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:53.540 18:10:26 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:53.540 18:10:26 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:53.540 18:10:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:53.540 ************************************ 00:34:53.540 START TEST fio_dif_1_multi_subsystems 00:34:53.540 ************************************ 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:53.540 bdev_null0 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:53.540 [2024-07-20 18:10:26.815778] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:53.540 bdev_null1 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.540 18:10:26 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:53.540 { 00:34:53.540 "params": { 00:34:53.540 "name": "Nvme$subsystem", 00:34:53.540 "trtype": "$TEST_TRANSPORT", 00:34:53.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:53.540 "adrfam": "ipv4", 00:34:53.540 "trsvcid": "$NVMF_PORT", 00:34:53.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:53.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:53.540 "hdgst": ${hdgst:-false}, 00:34:53.540 "ddgst": ${ddgst:-false} 00:34:53.540 }, 00:34:53.540 "method": "bdev_nvme_attach_controller" 00:34:53.540 } 00:34:53.540 EOF 00:34:53.540 )") 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:34:53.540 18:10:26 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:53.540 { 00:34:53.540 "params": { 00:34:53.540 "name": "Nvme$subsystem", 00:34:53.540 "trtype": "$TEST_TRANSPORT", 00:34:53.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:53.540 "adrfam": "ipv4", 00:34:53.540 "trsvcid": "$NVMF_PORT", 00:34:53.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:53.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:53.540 "hdgst": ${hdgst:-false}, 00:34:53.540 "ddgst": ${ddgst:-false} 00:34:53.540 }, 00:34:53.540 "method": "bdev_nvme_attach_controller" 00:34:53.540 } 00:34:53.540 EOF 00:34:53.540 )") 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
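Both the single-file and the multi-subsystem tests provision their targets with the same short RPC sequence that rpc_cmd traces above. A standalone sketch with the exact arguments from this run; the only assumption is that scripts/rpc.py (against the default /var/tmp/spdk.sock socket) stands in for the autotest rpc_cmd wrapper:

    # one-time: TCP transport with DIF insert/strip enabled (target/dif.sh@50 above)
    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip

    # per subsystem: a DIF type 1 null bdev (512-byte blocks plus 16 bytes of metadata),
    # then the subsystem, its namespace and a TCP listener on the target address
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # the multi-subsystem test repeats the block with bdev_null1 / cnode1 / 53313233-1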
00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:34:53.540 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:53.540 "params": { 00:34:53.540 "name": "Nvme0", 00:34:53.540 "trtype": "tcp", 00:34:53.540 "traddr": "10.0.0.2", 00:34:53.540 "adrfam": "ipv4", 00:34:53.540 "trsvcid": "4420", 00:34:53.540 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:53.540 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:53.540 "hdgst": false, 00:34:53.540 "ddgst": false 00:34:53.540 }, 00:34:53.540 "method": "bdev_nvme_attach_controller" 00:34:53.540 },{ 00:34:53.540 "params": { 00:34:53.540 "name": "Nvme1", 00:34:53.540 "trtype": "tcp", 00:34:53.540 "traddr": "10.0.0.2", 00:34:53.540 "adrfam": "ipv4", 00:34:53.541 "trsvcid": "4420", 00:34:53.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:53.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:53.541 "hdgst": false, 00:34:53.541 "ddgst": false 00:34:53.541 }, 00:34:53.541 "method": "bdev_nvme_attach_controller" 00:34:53.541 }' 00:34:53.541 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:34:53.541 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:34:53.541 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:34:53.541 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:53.541 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:34:53.541 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:34:53.541 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:34:53.541 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:34:53.541 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:53.541 18:10:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:53.541 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:53.541 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:53.541 fio-3.35 00:34:53.541 Starting 2 threads 00:34:53.541 EAL: No free 2048 kB hugepages reported on node 1 00:35:03.524 00:35:03.524 filename0: (groupid=0, jobs=1): err= 0: pid=1114647: Sat Jul 20 18:10:37 2024 00:35:03.524 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10041msec) 00:35:03.524 slat (nsec): min=6978, max=82471, avg=9048.81, stdev=4353.42 00:35:03.524 clat (usec): min=41754, max=43879, avg=41983.90, stdev=142.01 00:35:03.524 lat (usec): min=41762, max=43907, avg=41992.95, stdev=142.48 00:35:03.524 clat percentiles (usec): 00:35:03.524 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:35:03.524 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:03.524 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:03.524 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:35:03.524 | 99.99th=[43779] 
00:35:03.524 bw ( KiB/s): min= 352, max= 384, per=40.70%, avg=380.80, stdev= 9.85, samples=20 00:35:03.524 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:35:03.524 lat (msec) : 50=100.00% 00:35:03.524 cpu : usr=94.21%, sys=5.50%, ctx=22, majf=0, minf=188 00:35:03.524 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:03.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.524 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.525 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:03.525 filename1: (groupid=0, jobs=1): err= 0: pid=1114648: Sat Jul 20 18:10:37 2024 00:35:03.525 read: IOPS=138, BW=554KiB/s (567kB/s)(5552KiB/10027msec) 00:35:03.525 slat (nsec): min=7084, max=41423, avg=9044.46, stdev=3421.60 00:35:03.525 clat (usec): min=1183, max=42860, avg=28867.79, stdev=18900.89 00:35:03.525 lat (usec): min=1190, max=42901, avg=28876.83, stdev=18900.82 00:35:03.525 clat percentiles (usec): 00:35:03.525 | 1.00th=[ 1221], 5.00th=[ 1254], 10.00th=[ 1287], 20.00th=[ 1336], 00:35:03.525 | 30.00th=[ 1385], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:35:03.525 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:03.525 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:03.525 | 99.99th=[42730] 00:35:03.525 bw ( KiB/s): min= 352, max= 768, per=59.22%, avg=553.60, stdev=175.61, samples=20 00:35:03.525 iops : min= 88, max= 192, avg=138.40, stdev=43.90, samples=20 00:35:03.525 lat (msec) : 2=31.99%, 50=68.01% 00:35:03.525 cpu : usr=94.61%, sys=5.09%, ctx=8, majf=0, minf=127 00:35:03.525 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:03.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.525 issued rwts: total=1388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.525 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:03.525 00:35:03.525 Run status group 0 (all jobs): 00:35:03.525 READ: bw=934KiB/s (956kB/s), 381KiB/s-554KiB/s (390kB/s-567kB/s), io=9376KiB (9601kB), run=10027-10041msec 00:35:03.525 18:10:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:03.525 18:10:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:03.525 18:10:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:03.525 18:10:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:03.525 18:10:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:03.525 18:10:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:03.525 18:10:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.525 18:10:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:03.525 18:10:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.525 18:10:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:03.525 18:10:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.525 18:10:37 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:03.525 18:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.525 18:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:03.525 18:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:03.525 18:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:03.525 18:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:03.525 18:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.525 18:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:03.525 18:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.525 18:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:03.525 18:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.525 18:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:03.525 18:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.525 00:35:03.525 real 0m11.237s 00:35:03.525 user 0m20.081s 00:35:03.525 sys 0m1.379s 00:35:03.525 18:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:03.525 18:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:03.525 ************************************ 00:35:03.525 END TEST fio_dif_1_multi_subsystems 00:35:03.525 ************************************ 00:35:03.525 18:10:38 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:03.525 18:10:38 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:03.525 18:10:38 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:03.525 18:10:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:03.525 ************************************ 00:35:03.525 START TEST fio_dif_rand_params 00:35:03.525 ************************************ 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:03.525 bdev_null0 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:03.525 [2024-07-20 18:10:38.094587] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:03.525 { 00:35:03.525 "params": { 00:35:03.525 "name": "Nvme$subsystem", 00:35:03.525 "trtype": "$TEST_TRANSPORT", 00:35:03.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:03.525 "adrfam": "ipv4", 00:35:03.525 "trsvcid": "$NVMF_PORT", 00:35:03.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:03.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:03.525 "hdgst": ${hdgst:-false}, 00:35:03.525 "ddgst": ${ddgst:-false} 00:35:03.525 }, 00:35:03.525 "method": "bdev_nvme_attach_controller" 00:35:03.525 } 00:35:03.525 EOF 00:35:03.525 )") 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
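The I/O itself is driven by stock fio through the SPDK bdev plugin: the generated bdev_nvme JSON (printed by nvmf/common.sh@558 just below) is handed to --spdk_json_conf while the plugin is LD_PRELOADed. A standalone equivalent follows; the outer "subsystems"/"bdev" wrapper around the printed fragment and the dif.fio job-file name are assumptions, the rest is taken from the trace:

    # JSON bdev config: the trace prints only the method/params fragment; the surrounding
    # "subsystems"/"bdev" layout below is the standard SPDK JSON config shape (assumed here)
    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        } ]
      } ]
    }
    EOF

    # run fio with the SPDK bdev ioengine preloaded (paths relative to the SPDK checkout used here);
    # dif.fio is a placeholder for the job file sketched a little further down
    LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0.json dif.fio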
00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:03.525 18:10:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:03.525 "params": { 00:35:03.525 "name": "Nvme0", 00:35:03.525 "trtype": "tcp", 00:35:03.525 "traddr": "10.0.0.2", 00:35:03.526 "adrfam": "ipv4", 00:35:03.526 "trsvcid": "4420", 00:35:03.526 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:03.526 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:03.526 "hdgst": false, 00:35:03.526 "ddgst": false 00:35:03.526 }, 00:35:03.526 "method": "bdev_nvme_attach_controller" 00:35:03.526 }' 00:35:03.526 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:03.526 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:03.526 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.526 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.526 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:03.526 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:03.526 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:03.526 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:03.526 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:03.526 18:10:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.783 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:03.783 ... 
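The job file generated by gen_fio_conf is not echoed into the log, but the fio header above pins down most of it (job name filename0, randread, 128 KiB blocks, spdk_bdev engine, iodepth 3), and target/dif.sh@103 set numjobs=3 with a 5 second runtime. A plausible reconstruction; the filename (the bdev the Nvme0 controller exposes, assumed to be Nvme0n1) and the thread=1 setting follow usual SPDK fio-plugin usage rather than anything shown in the trace:

    cat > dif.fio <<'EOF'
    [global]
    thread=1            # SPDK fio plugins are normally run with thread=1 (assumed)
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=5
    time_based=1        # assumed, so the 5s runtime bounds the job

    [filename0]
    filename=Nvme0n1    # bdev from the Nvme0 controller attached above (assumed name)
    EOF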
00:35:03.783 fio-3.35 00:35:03.783 Starting 3 threads 00:35:03.783 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.384 00:35:10.384 filename0: (groupid=0, jobs=1): err= 0: pid=1116055: Sat Jul 20 18:10:43 2024 00:35:10.384 read: IOPS=138, BW=17.3MiB/s (18.2MB/s)(87.6MiB/5051msec) 00:35:10.384 slat (nsec): min=7320, max=67768, avg=11473.87, stdev=4183.12 00:35:10.384 clat (usec): min=7910, max=59181, avg=21524.20, stdev=17315.26 00:35:10.384 lat (usec): min=7922, max=59199, avg=21535.67, stdev=17315.24 00:35:10.384 clat percentiles (usec): 00:35:10.384 | 1.00th=[ 8586], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11076], 00:35:10.384 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12780], 60.00th=[14091], 00:35:10.384 | 70.00th=[14746], 80.00th=[51643], 90.00th=[53740], 95.00th=[55313], 00:35:10.384 | 99.00th=[56361], 99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:35:10.384 | 99.99th=[58983] 00:35:10.384 bw ( KiB/s): min=10752, max=23808, per=30.10%, avg=17894.40, stdev=4066.47, samples=10 00:35:10.384 iops : min= 84, max= 186, avg=139.80, stdev=31.77, samples=10 00:35:10.384 lat (msec) : 10=6.42%, 20=71.47%, 100=22.11% 00:35:10.384 cpu : usr=90.38%, sys=8.65%, ctx=26, majf=0, minf=136 00:35:10.384 IO depths : 1=5.7%, 2=94.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.384 issued rwts: total=701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.384 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:10.384 filename0: (groupid=0, jobs=1): err= 0: pid=1116056: Sat Jul 20 18:10:43 2024 00:35:10.384 read: IOPS=165, BW=20.7MiB/s (21.7MB/s)(104MiB/5009msec) 00:35:10.384 slat (nsec): min=5010, max=34398, avg=11961.42, stdev=3221.39 00:35:10.384 clat (usec): min=7673, max=93370, avg=18128.27, stdev=15607.64 00:35:10.384 lat (usec): min=7685, max=93384, avg=18140.24, stdev=15607.48 00:35:10.384 clat percentiles (usec): 00:35:10.384 | 1.00th=[ 8160], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10421], 00:35:10.384 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11994], 60.00th=[12518], 00:35:10.384 | 70.00th=[13304], 80.00th=[14746], 90.00th=[52691], 95.00th=[54264], 00:35:10.384 | 99.00th=[56361], 99.50th=[57934], 99.90th=[93848], 99.95th=[93848], 00:35:10.384 | 99.99th=[93848] 00:35:10.384 bw ( KiB/s): min=13056, max=28160, per=35.52%, avg=21120.00, stdev=4013.84, samples=10 00:35:10.384 iops : min= 102, max= 220, avg=165.00, stdev=31.36, samples=10 00:35:10.384 lat (msec) : 10=14.73%, 20=70.05%, 50=0.36%, 100=14.86% 00:35:10.384 cpu : usr=90.10%, sys=9.01%, ctx=21, majf=0, minf=61 00:35:10.384 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.384 issued rwts: total=828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.384 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:10.384 filename0: (groupid=0, jobs=1): err= 0: pid=1116057: Sat Jul 20 18:10:43 2024 00:35:10.384 read: IOPS=163, BW=20.4MiB/s (21.4MB/s)(102MiB/5009msec) 00:35:10.384 slat (nsec): min=7363, max=36974, avg=12664.07, stdev=3713.24 00:35:10.384 clat (usec): min=8566, max=97223, avg=18369.13, stdev=15881.86 00:35:10.384 lat (usec): min=8578, max=97238, avg=18381.80, stdev=15881.72 00:35:10.384 clat percentiles (usec): 00:35:10.384 
| 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10421], 00:35:10.384 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11994], 60.00th=[12911], 00:35:10.384 | 70.00th=[13698], 80.00th=[15139], 90.00th=[53216], 95.00th=[54789], 00:35:10.384 | 99.00th=[57410], 99.50th=[58459], 99.90th=[96994], 99.95th=[96994], 00:35:10.384 | 99.99th=[96994] 00:35:10.384 bw ( KiB/s): min=16640, max=26624, per=35.06%, avg=20843.10, stdev=3541.79, samples=10 00:35:10.384 iops : min= 130, max= 208, avg=162.80, stdev=27.64, samples=10 00:35:10.384 lat (msec) : 10=13.34%, 20=71.73%, 100=14.93% 00:35:10.384 cpu : usr=90.58%, sys=8.77%, ctx=10, majf=0, minf=127 00:35:10.384 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.384 issued rwts: total=817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.384 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:10.384 00:35:10.384 Run status group 0 (all jobs): 00:35:10.384 READ: bw=58.1MiB/s (60.9MB/s), 17.3MiB/s-20.7MiB/s (18.2MB/s-21.7MB/s), io=293MiB (307MB), run=5009-5051msec 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:10.384 18:10:44 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.384 bdev_null0 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.384 [2024-07-20 18:10:44.243523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.384 bdev_null1 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
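For the 4k/8-job/16-deep pass the test switches to DIF type 2 and, with files=2, exports three null bdevs through three subsystems (cnode0 through cnode2; the trace for cnode2 follows below). Condensed into a loop, again with scripts/rpc.py standing in for the rpc_cmd wrapper while every argument is taken verbatim from the trace:

    for i in 0 1 2; do
        scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done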
00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.384 bdev_null2 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.384 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # 
gen_fio_conf 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:10.385 { 00:35:10.385 "params": { 00:35:10.385 "name": "Nvme$subsystem", 00:35:10.385 "trtype": "$TEST_TRANSPORT", 00:35:10.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:10.385 "adrfam": "ipv4", 00:35:10.385 "trsvcid": "$NVMF_PORT", 00:35:10.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:10.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:10.385 "hdgst": ${hdgst:-false}, 00:35:10.385 "ddgst": ${ddgst:-false} 00:35:10.385 }, 00:35:10.385 "method": "bdev_nvme_attach_controller" 00:35:10.385 } 00:35:10.385 EOF 00:35:10.385 )") 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:10.385 { 00:35:10.385 "params": { 00:35:10.385 "name": "Nvme$subsystem", 00:35:10.385 "trtype": "$TEST_TRANSPORT", 00:35:10.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:10.385 "adrfam": "ipv4", 00:35:10.385 "trsvcid": "$NVMF_PORT", 00:35:10.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:10.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:10.385 "hdgst": ${hdgst:-false}, 00:35:10.385 "ddgst": ${ddgst:-false} 00:35:10.385 }, 00:35:10.385 "method": "bdev_nvme_attach_controller" 00:35:10.385 } 00:35:10.385 EOF 00:35:10.385 )") 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:10.385 { 00:35:10.385 "params": { 00:35:10.385 "name": "Nvme$subsystem", 00:35:10.385 "trtype": "$TEST_TRANSPORT", 00:35:10.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:10.385 "adrfam": "ipv4", 00:35:10.385 "trsvcid": "$NVMF_PORT", 00:35:10.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:10.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:10.385 "hdgst": ${hdgst:-false}, 00:35:10.385 "ddgst": ${ddgst:-false} 00:35:10.385 }, 00:35:10.385 "method": "bdev_nvme_attach_controller" 00:35:10.385 } 00:35:10.385 EOF 00:35:10.385 )") 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:10.385 "params": { 00:35:10.385 "name": "Nvme0", 00:35:10.385 "trtype": "tcp", 00:35:10.385 "traddr": "10.0.0.2", 00:35:10.385 "adrfam": "ipv4", 00:35:10.385 "trsvcid": "4420", 00:35:10.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.385 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:10.385 "hdgst": false, 00:35:10.385 "ddgst": false 00:35:10.385 }, 00:35:10.385 "method": "bdev_nvme_attach_controller" 00:35:10.385 },{ 00:35:10.385 "params": { 00:35:10.385 "name": "Nvme1", 00:35:10.385 "trtype": "tcp", 00:35:10.385 "traddr": "10.0.0.2", 00:35:10.385 "adrfam": "ipv4", 00:35:10.385 "trsvcid": "4420", 00:35:10.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:10.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:10.385 "hdgst": false, 00:35:10.385 "ddgst": false 00:35:10.385 }, 00:35:10.385 "method": "bdev_nvme_attach_controller" 00:35:10.385 },{ 00:35:10.385 "params": { 00:35:10.385 "name": "Nvme2", 00:35:10.385 "trtype": "tcp", 00:35:10.385 "traddr": "10.0.0.2", 00:35:10.385 "adrfam": "ipv4", 00:35:10.385 "trsvcid": "4420", 00:35:10.385 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:10.385 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:10.385 "hdgst": false, 00:35:10.385 "ddgst": false 00:35:10.385 }, 00:35:10.385 "method": "bdev_nvme_attach_controller" 00:35:10.385 }' 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:10.385 18:10:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.385 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:10.385 ... 00:35:10.385 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:10.385 ... 00:35:10.385 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:10.385 ... 00:35:10.385 fio-3.35 00:35:10.385 Starting 24 threads 00:35:10.385 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.662 00:35:22.662 filename0: (groupid=0, jobs=1): err= 0: pid=1116917: Sat Jul 20 18:10:55 2024 00:35:22.662 read: IOPS=81, BW=327KiB/s (335kB/s)(3280KiB/10017msec) 00:35:22.662 slat (usec): min=4, max=452, avg=30.45, stdev=49.54 00:35:22.662 clat (msec): min=69, max=337, avg=195.19, stdev=50.53 00:35:22.662 lat (msec): min=69, max=337, avg=195.22, stdev=50.55 00:35:22.662 clat percentiles (msec): 00:35:22.662 | 1.00th=[ 70], 5.00th=[ 120], 10.00th=[ 127], 20.00th=[ 157], 00:35:22.662 | 30.00th=[ 165], 40.00th=[ 182], 50.00th=[ 201], 60.00th=[ 211], 00:35:22.662 | 70.00th=[ 230], 80.00th=[ 245], 90.00th=[ 251], 95.00th=[ 271], 00:35:22.662 | 99.00th=[ 296], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:35:22.662 | 99.99th=[ 338] 00:35:22.662 bw ( KiB/s): min= 256, max= 384, per=5.10%, avg=321.60, stdev=57.08, samples=20 00:35:22.662 iops : min= 64, max= 96, avg=80.40, stdev=14.27, samples=20 00:35:22.662 lat (msec) : 100=3.90%, 250=83.66%, 500=12.44% 00:35:22.662 cpu : usr=96.55%, sys=2.20%, ctx=62, majf=0, minf=33 00:35:22.662 IO depths : 1=2.2%, 2=6.5%, 4=18.8%, 8=62.1%, 16=10.5%, 32=0.0%, >=64=0.0% 00:35:22.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.662 complete : 0=0.0%, 4=92.4%, 8=2.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.662 issued rwts: total=820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.662 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.662 filename0: (groupid=0, jobs=1): err= 0: pid=1116918: Sat Jul 20 18:10:55 2024 00:35:22.662 read: IOPS=66, BW=264KiB/s (270kB/s)(2688KiB/10181msec) 00:35:22.662 slat (usec): min=8, max=159, avg=23.84, stdev=19.21 00:35:22.662 clat (msec): min=126, max=372, avg=242.19, stdev=43.76 00:35:22.662 lat (msec): min=126, max=372, avg=242.21, stdev=43.76 00:35:22.662 clat percentiles (msec): 00:35:22.662 | 1.00th=[ 132], 5.00th=[ 140], 10.00th=[ 180], 20.00th=[ 215], 00:35:22.662 | 30.00th=[ 226], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 259], 00:35:22.662 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 284], 95.00th=[ 300], 00:35:22.662 | 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 372], 99.95th=[ 372], 00:35:22.662 | 99.99th=[ 372] 00:35:22.662 bw ( KiB/s): min= 128, max= 384, per=4.16%, avg=262.40, stdev=77.42, samples=20 00:35:22.662 iops : min= 32, max= 96, avg=65.60, stdev=19.35, samples=20 00:35:22.662 lat (msec) : 250=41.37%, 500=58.63% 00:35:22.662 cpu : usr=96.63%, sys=2.07%, ctx=17, majf=0, minf=15 00:35:22.662 IO depths : 1=4.6%, 
2=10.6%, 4=24.7%, 8=52.2%, 16=7.9%, 32=0.0%, >=64=0.0% 00:35:22.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.662 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.662 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.662 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.662 filename0: (groupid=0, jobs=1): err= 0: pid=1116919: Sat Jul 20 18:10:55 2024 00:35:22.662 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10173msec) 00:35:22.662 slat (usec): min=8, max=211, avg=32.21, stdev=17.18 00:35:22.662 clat (msec): min=128, max=456, avg=260.53, stdev=43.41 00:35:22.662 lat (msec): min=128, max=456, avg=260.56, stdev=43.40 00:35:22.662 clat percentiles (msec): 00:35:22.662 | 1.00th=[ 144], 5.00th=[ 186], 10.00th=[ 209], 20.00th=[ 239], 00:35:22.662 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 264], 60.00th=[ 271], 00:35:22.662 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 296], 95.00th=[ 313], 00:35:22.662 | 99.00th=[ 397], 99.50th=[ 430], 99.90th=[ 456], 99.95th=[ 456], 00:35:22.662 | 99.99th=[ 456] 00:35:22.662 bw ( KiB/s): min= 128, max= 384, per=3.86%, avg=243.20, stdev=55.57, samples=20 00:35:22.662 iops : min= 32, max= 96, avg=60.80, stdev=13.89, samples=20 00:35:22.662 lat (msec) : 250=32.69%, 500=67.31% 00:35:22.662 cpu : usr=97.23%, sys=1.78%, ctx=40, majf=0, minf=19 00:35:22.662 IO depths : 1=3.5%, 2=9.8%, 4=25.0%, 8=52.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:35:22.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.662 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.663 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.663 filename0: (groupid=0, jobs=1): err= 0: pid=1116920: Sat Jul 20 18:10:55 2024 00:35:22.663 read: IOPS=62, BW=251KiB/s (257kB/s)(2560KiB/10182msec) 00:35:22.663 slat (usec): min=11, max=104, avg=34.55, stdev=14.71 00:35:22.663 clat (msec): min=138, max=331, avg=254.28, stdev=33.40 00:35:22.663 lat (msec): min=138, max=331, avg=254.32, stdev=33.40 00:35:22.663 clat percentiles (msec): 00:35:22.663 | 1.00th=[ 140], 5.00th=[ 192], 10.00th=[ 218], 20.00th=[ 228], 00:35:22.663 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 257], 60.00th=[ 264], 00:35:22.663 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 292], 95.00th=[ 300], 00:35:22.663 | 99.00th=[ 305], 99.50th=[ 309], 99.90th=[ 334], 99.95th=[ 334], 00:35:22.663 | 99.99th=[ 334] 00:35:22.663 bw ( KiB/s): min= 128, max= 368, per=3.95%, avg=249.60, stdev=59.05, samples=20 00:35:22.663 iops : min= 32, max= 92, avg=62.40, stdev=14.76, samples=20 00:35:22.663 lat (msec) : 250=38.12%, 500=61.88% 00:35:22.663 cpu : usr=98.29%, sys=1.21%, ctx=30, majf=0, minf=20 00:35:22.663 IO depths : 1=1.7%, 2=8.0%, 4=25.0%, 8=54.5%, 16=10.8%, 32=0.0%, >=64=0.0% 00:35:22.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.663 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.663 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.663 filename0: (groupid=0, jobs=1): err= 0: pid=1116921: Sat Jul 20 18:10:55 2024 00:35:22.663 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10178msec) 00:35:22.663 slat (usec): min=7, max=226, avg=31.03, stdev=13.46 00:35:22.663 clat (msec): min=200, max=337, avg=260.54, stdev=26.84 00:35:22.663 lat (msec): min=200, 
max=337, avg=260.57, stdev=26.83 00:35:22.663 clat percentiles (msec): 00:35:22.663 | 1.00th=[ 207], 5.00th=[ 207], 10.00th=[ 224], 20.00th=[ 241], 00:35:22.663 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 271], 00:35:22.663 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 300], 95.00th=[ 300], 00:35:22.663 | 99.00th=[ 321], 99.50th=[ 330], 99.90th=[ 338], 99.95th=[ 338], 00:35:22.663 | 99.99th=[ 338] 00:35:22.663 bw ( KiB/s): min= 128, max= 256, per=3.86%, avg=243.20, stdev=36.93, samples=20 00:35:22.663 iops : min= 32, max= 64, avg=60.80, stdev= 9.23, samples=20 00:35:22.663 lat (msec) : 250=28.21%, 500=71.79% 00:35:22.663 cpu : usr=98.12%, sys=1.40%, ctx=20, majf=0, minf=20 00:35:22.663 IO depths : 1=3.8%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:35:22.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.663 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.663 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.663 filename0: (groupid=0, jobs=1): err= 0: pid=1116922: Sat Jul 20 18:10:55 2024 00:35:22.663 read: IOPS=70, BW=283KiB/s (290kB/s)(2872KiB/10146msec) 00:35:22.663 slat (usec): min=8, max=100, avg=35.18, stdev=27.38 00:35:22.663 clat (msec): min=68, max=423, avg=225.42, stdev=61.99 00:35:22.663 lat (msec): min=68, max=423, avg=225.45, stdev=62.00 00:35:22.663 clat percentiles (msec): 00:35:22.663 | 1.00th=[ 68], 5.00th=[ 125], 10.00th=[ 136], 20.00th=[ 167], 00:35:22.663 | 30.00th=[ 207], 40.00th=[ 228], 50.00th=[ 241], 60.00th=[ 253], 00:35:22.663 | 70.00th=[ 264], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 305], 00:35:22.663 | 99.00th=[ 380], 99.50th=[ 414], 99.90th=[ 422], 99.95th=[ 422], 00:35:22.663 | 99.99th=[ 422] 00:35:22.663 bw ( KiB/s): min= 128, max= 384, per=4.45%, avg=280.80, stdev=75.31, samples=20 00:35:22.663 iops : min= 32, max= 96, avg=70.20, stdev=18.83, samples=20 00:35:22.663 lat (msec) : 100=4.46%, 250=54.60%, 500=40.95% 00:35:22.663 cpu : usr=97.07%, sys=1.71%, ctx=51, majf=0, minf=26 00:35:22.663 IO depths : 1=2.5%, 2=8.8%, 4=25.1%, 8=53.8%, 16=9.9%, 32=0.0%, >=64=0.0% 00:35:22.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.663 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.663 issued rwts: total=718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.663 filename0: (groupid=0, jobs=1): err= 0: pid=1116923: Sat Jul 20 18:10:55 2024 00:35:22.663 read: IOPS=61, BW=246KiB/s (251kB/s)(2496KiB/10164msec) 00:35:22.663 slat (usec): min=10, max=277, avg=42.46, stdev=27.33 00:35:22.663 clat (msec): min=206, max=303, avg=260.22, stdev=24.51 00:35:22.663 lat (msec): min=207, max=303, avg=260.26, stdev=24.50 00:35:22.663 clat percentiles (msec): 00:35:22.663 | 1.00th=[ 207], 5.00th=[ 218], 10.00th=[ 222], 20.00th=[ 241], 00:35:22.663 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 271], 00:35:22.663 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 296], 95.00th=[ 300], 00:35:22.663 | 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305], 00:35:22.663 | 99.99th=[ 305] 00:35:22.663 bw ( KiB/s): min= 128, max= 384, per=3.86%, avg=243.20, stdev=57.24, samples=20 00:35:22.663 iops : min= 32, max= 96, avg=60.80, stdev=14.31, samples=20 00:35:22.663 lat (msec) : 250=33.33%, 500=66.67% 00:35:22.663 cpu : usr=96.87%, sys=1.72%, ctx=33, majf=0, 
minf=18 00:35:22.663 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:22.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.663 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.663 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.663 filename0: (groupid=0, jobs=1): err= 0: pid=1116924: Sat Jul 20 18:10:55 2024 00:35:22.663 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10182msec) 00:35:22.663 slat (usec): min=8, max=438, avg=38.06, stdev=39.81 00:35:22.663 clat (msec): min=186, max=311, avg=260.76, stdev=26.04 00:35:22.663 lat (msec): min=186, max=311, avg=260.80, stdev=26.04 00:35:22.663 clat percentiles (msec): 00:35:22.663 | 1.00th=[ 201], 5.00th=[ 207], 10.00th=[ 224], 20.00th=[ 241], 00:35:22.663 | 30.00th=[ 253], 40.00th=[ 257], 50.00th=[ 262], 60.00th=[ 271], 00:35:22.663 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 300], 95.00th=[ 300], 00:35:22.663 | 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 313], 99.95th=[ 313], 00:35:22.663 | 99.99th=[ 313] 00:35:22.663 bw ( KiB/s): min= 128, max= 272, per=3.86%, avg=243.20, stdev=39.74, samples=20 00:35:22.663 iops : min= 32, max= 68, avg=60.80, stdev= 9.93, samples=20 00:35:22.663 lat (msec) : 250=26.28%, 500=73.72% 00:35:22.663 cpu : usr=94.65%, sys=2.74%, ctx=85, majf=0, minf=24 00:35:22.663 IO depths : 1=4.6%, 2=9.6%, 4=23.9%, 8=54.0%, 16=7.9%, 32=0.0%, >=64=0.0% 00:35:22.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.663 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.663 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.663 filename1: (groupid=0, jobs=1): err= 0: pid=1116925: Sat Jul 20 18:10:55 2024 00:35:22.663 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10172msec) 00:35:22.663 slat (usec): min=8, max=171, avg=36.77, stdev=18.56 00:35:22.663 clat (msec): min=154, max=398, avg=260.48, stdev=29.49 00:35:22.663 lat (msec): min=154, max=398, avg=260.51, stdev=29.49 00:35:22.663 clat percentiles (msec): 00:35:22.663 | 1.00th=[ 209], 5.00th=[ 218], 10.00th=[ 222], 20.00th=[ 241], 00:35:22.663 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 271], 00:35:22.663 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 300], 95.00th=[ 300], 00:35:22.663 | 99.00th=[ 305], 99.50th=[ 384], 99.90th=[ 397], 99.95th=[ 397], 00:35:22.663 | 99.99th=[ 397] 00:35:22.663 bw ( KiB/s): min= 128, max= 384, per=3.86%, avg=243.20, stdev=57.48, samples=20 00:35:22.663 iops : min= 32, max= 96, avg=60.80, stdev=14.37, samples=20 00:35:22.663 lat (msec) : 250=34.29%, 500=65.71% 00:35:22.663 cpu : usr=96.49%, sys=1.95%, ctx=53, majf=0, minf=23 00:35:22.663 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:22.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.663 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.663 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.663 filename1: (groupid=0, jobs=1): err= 0: pid=1116926: Sat Jul 20 18:10:55 2024 00:35:22.663 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10183msec) 00:35:22.663 slat (nsec): min=9061, max=62888, avg=29452.86, stdev=10645.08 00:35:22.663 clat (msec): min=186, max=353, avg=260.80, 
stdev=28.80 00:35:22.663 lat (msec): min=186, max=353, avg=260.83, stdev=28.80 00:35:22.663 clat percentiles (msec): 00:35:22.663 | 1.00th=[ 190], 5.00th=[ 207], 10.00th=[ 224], 20.00th=[ 241], 00:35:22.663 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 271], 00:35:22.663 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 300], 95.00th=[ 305], 00:35:22.663 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 355], 99.95th=[ 355], 00:35:22.663 | 99.99th=[ 355] 00:35:22.663 bw ( KiB/s): min= 128, max= 288, per=3.86%, avg=243.20, stdev=36.56, samples=20 00:35:22.663 iops : min= 32, max= 72, avg=60.80, stdev= 9.14, samples=20 00:35:22.663 lat (msec) : 250=29.17%, 500=70.83% 00:35:22.663 cpu : usr=97.86%, sys=1.72%, ctx=20, majf=0, minf=24 00:35:22.663 IO depths : 1=1.1%, 2=2.2%, 4=18.8%, 8=66.5%, 16=11.4%, 32=0.0%, >=64=0.0% 00:35:22.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.663 complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.663 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.663 filename1: (groupid=0, jobs=1): err= 0: pid=1116927: Sat Jul 20 18:10:55 2024 00:35:22.663 read: IOPS=88, BW=353KiB/s (362kB/s)(3584KiB/10146msec) 00:35:22.663 slat (usec): min=6, max=252, avg=16.74, stdev=14.65 00:35:22.663 clat (msec): min=68, max=275, avg=181.04, stdev=45.13 00:35:22.663 lat (msec): min=68, max=275, avg=181.05, stdev=45.13 00:35:22.663 clat percentiles (msec): 00:35:22.663 | 1.00th=[ 69], 5.00th=[ 116], 10.00th=[ 127], 20.00th=[ 144], 00:35:22.663 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 180], 60.00th=[ 197], 00:35:22.663 | 70.00th=[ 207], 80.00th=[ 224], 90.00th=[ 241], 95.00th=[ 259], 00:35:22.663 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:35:22.663 | 99.99th=[ 275] 00:35:22.663 bw ( KiB/s): min= 144, max= 512, per=5.59%, avg=352.00, stdev=76.82, samples=20 00:35:22.663 iops : min= 36, max= 128, avg=88.00, stdev=19.21, samples=20 00:35:22.663 lat (msec) : 100=3.57%, 250=89.51%, 500=6.92% 00:35:22.663 cpu : usr=97.57%, sys=1.74%, ctx=48, majf=0, minf=29 00:35:22.663 IO depths : 1=2.7%, 2=8.5%, 4=24.3%, 8=54.7%, 16=9.8%, 32=0.0%, >=64=0.0% 00:35:22.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.663 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.663 issued rwts: total=896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.664 filename1: (groupid=0, jobs=1): err= 0: pid=1116928: Sat Jul 20 18:10:55 2024 00:35:22.664 read: IOPS=64, BW=258KiB/s (264kB/s)(2624KiB/10187msec) 00:35:22.664 slat (nsec): min=8427, max=87002, avg=26444.93, stdev=17629.12 00:35:22.664 clat (msec): min=116, max=366, avg=248.14, stdev=40.25 00:35:22.664 lat (msec): min=116, max=366, avg=248.16, stdev=40.24 00:35:22.664 clat percentiles (msec): 00:35:22.664 | 1.00th=[ 140], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 211], 00:35:22.664 | 30.00th=[ 230], 40.00th=[ 243], 50.00th=[ 257], 60.00th=[ 262], 00:35:22.664 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 292], 95.00th=[ 300], 00:35:22.664 | 99.00th=[ 363], 99.50th=[ 368], 99.90th=[ 368], 99.95th=[ 368], 00:35:22.664 | 99.99th=[ 368] 00:35:22.664 bw ( KiB/s): min= 128, max= 384, per=4.07%, avg=256.00, stdev=55.43, samples=20 00:35:22.664 iops : min= 32, max= 96, avg=64.00, stdev=13.86, samples=20 00:35:22.664 lat (msec) : 250=44.82%, 500=55.18% 
00:35:22.664 cpu : usr=98.39%, sys=1.22%, ctx=19, majf=0, minf=24 00:35:22.664 IO depths : 1=3.5%, 2=9.6%, 4=24.5%, 8=53.4%, 16=9.0%, 32=0.0%, >=64=0.0% 00:35:22.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.664 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.664 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.664 filename1: (groupid=0, jobs=1): err= 0: pid=1116929: Sat Jul 20 18:10:55 2024 00:35:22.664 read: IOPS=61, BW=245KiB/s (250kB/s)(2488KiB/10172msec) 00:35:22.664 slat (nsec): min=8170, max=98480, avg=32887.38, stdev=15303.23 00:35:22.664 clat (msec): min=131, max=456, avg=261.27, stdev=45.40 00:35:22.664 lat (msec): min=131, max=456, avg=261.30, stdev=45.40 00:35:22.664 clat percentiles (msec): 00:35:22.664 | 1.00th=[ 144], 5.00th=[ 182], 10.00th=[ 209], 20.00th=[ 239], 00:35:22.664 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 264], 60.00th=[ 271], 00:35:22.664 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 296], 95.00th=[ 321], 00:35:22.664 | 99.00th=[ 422], 99.50th=[ 443], 99.90th=[ 456], 99.95th=[ 456], 00:35:22.664 | 99.99th=[ 456] 00:35:22.664 bw ( KiB/s): min= 128, max= 368, per=3.84%, avg=242.40, stdev=53.77, samples=20 00:35:22.664 iops : min= 32, max= 92, avg=60.60, stdev=13.44, samples=20 00:35:22.664 lat (msec) : 250=33.12%, 500=66.88% 00:35:22.664 cpu : usr=95.22%, sys=2.55%, ctx=87, majf=0, minf=26 00:35:22.664 IO depths : 1=3.4%, 2=9.6%, 4=25.1%, 8=52.9%, 16=9.0%, 32=0.0%, >=64=0.0% 00:35:22.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.664 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.664 issued rwts: total=622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.664 filename1: (groupid=0, jobs=1): err= 0: pid=1116930: Sat Jul 20 18:10:55 2024 00:35:22.664 read: IOPS=62, BW=251KiB/s (257kB/s)(2560KiB/10181msec) 00:35:22.664 slat (usec): min=4, max=241, avg=45.10, stdev=32.31 00:35:22.664 clat (msec): min=138, max=347, avg=254.17, stdev=35.26 00:35:22.664 lat (msec): min=138, max=347, avg=254.21, stdev=35.27 00:35:22.664 clat percentiles (msec): 00:35:22.664 | 1.00th=[ 140], 5.00th=[ 174], 10.00th=[ 218], 20.00th=[ 226], 00:35:22.664 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 257], 60.00th=[ 266], 00:35:22.664 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 292], 95.00th=[ 300], 00:35:22.664 | 99.00th=[ 326], 99.50th=[ 342], 99.90th=[ 347], 99.95th=[ 347], 00:35:22.664 | 99.99th=[ 347] 00:35:22.664 bw ( KiB/s): min= 128, max= 384, per=3.95%, avg=249.60, stdev=63.87, samples=20 00:35:22.664 iops : min= 32, max= 96, avg=62.40, stdev=15.97, samples=20 00:35:22.664 lat (msec) : 250=37.50%, 500=62.50% 00:35:22.664 cpu : usr=96.49%, sys=1.97%, ctx=42, majf=0, minf=19 00:35:22.664 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:35:22.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.664 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.664 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.664 filename1: (groupid=0, jobs=1): err= 0: pid=1116931: Sat Jul 20 18:10:55 2024 00:35:22.664 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10175msec) 00:35:22.664 slat (nsec): min=8742, max=66929, avg=30577.02, 
stdev=12764.12 00:35:22.664 clat (msec): min=153, max=398, avg=260.53, stdev=35.78 00:35:22.664 lat (msec): min=153, max=398, avg=260.56, stdev=35.78 00:35:22.664 clat percentiles (msec): 00:35:22.664 | 1.00th=[ 159], 5.00th=[ 209], 10.00th=[ 220], 20.00th=[ 239], 00:35:22.664 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 259], 60.00th=[ 271], 00:35:22.664 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 300], 95.00th=[ 305], 00:35:22.664 | 99.00th=[ 380], 99.50th=[ 393], 99.90th=[ 397], 99.95th=[ 397], 00:35:22.664 | 99.99th=[ 397] 00:35:22.664 bw ( KiB/s): min= 128, max= 384, per=3.86%, avg=243.20, stdev=57.48, samples=20 00:35:22.664 iops : min= 32, max= 96, avg=60.80, stdev=14.37, samples=20 00:35:22.664 lat (msec) : 250=36.22%, 500=63.78% 00:35:22.664 cpu : usr=98.52%, sys=1.10%, ctx=17, majf=0, minf=21 00:35:22.664 IO depths : 1=3.8%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:35:22.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.664 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.664 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.664 filename1: (groupid=0, jobs=1): err= 0: pid=1116932: Sat Jul 20 18:10:55 2024 00:35:22.664 read: IOPS=74, BW=296KiB/s (304kB/s)(3008KiB/10146msec) 00:35:22.664 slat (usec): min=5, max=446, avg=42.82, stdev=51.88 00:35:22.664 clat (msec): min=68, max=357, avg=213.65, stdev=52.33 00:35:22.664 lat (msec): min=68, max=357, avg=213.70, stdev=52.35 00:35:22.664 clat percentiles (msec): 00:35:22.664 | 1.00th=[ 69], 5.00th=[ 130], 10.00th=[ 161], 20.00th=[ 171], 00:35:22.664 | 30.00th=[ 184], 40.00th=[ 207], 50.00th=[ 224], 60.00th=[ 236], 00:35:22.664 | 70.00th=[ 249], 80.00th=[ 257], 90.00th=[ 271], 95.00th=[ 288], 00:35:22.664 | 99.00th=[ 334], 99.50th=[ 342], 99.90th=[ 359], 99.95th=[ 359], 00:35:22.664 | 99.99th=[ 359] 00:35:22.664 bw ( KiB/s): min= 128, max= 384, per=4.73%, avg=298.40, stdev=68.53, samples=20 00:35:22.664 iops : min= 32, max= 96, avg=74.60, stdev=17.13, samples=20 00:35:22.664 lat (msec) : 100=4.26%, 250=68.88%, 500=26.86% 00:35:22.664 cpu : usr=96.20%, sys=2.31%, ctx=95, majf=0, minf=24 00:35:22.664 IO depths : 1=2.5%, 2=7.3%, 4=20.3%, 8=59.8%, 16=10.0%, 32=0.0%, >=64=0.0% 00:35:22.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.664 complete : 0=0.0%, 4=92.8%, 8=1.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.664 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.664 filename2: (groupid=0, jobs=1): err= 0: pid=1116933: Sat Jul 20 18:10:55 2024 00:35:22.664 read: IOPS=61, BW=246KiB/s (251kB/s)(2496KiB/10165msec) 00:35:22.664 slat (nsec): min=8568, max=67643, avg=32203.47, stdev=12723.03 00:35:22.664 clat (msec): min=207, max=303, avg=260.34, stdev=24.36 00:35:22.664 lat (msec): min=207, max=303, avg=260.38, stdev=24.36 00:35:22.664 clat percentiles (msec): 00:35:22.664 | 1.00th=[ 209], 5.00th=[ 218], 10.00th=[ 222], 20.00th=[ 241], 00:35:22.664 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 271], 00:35:22.664 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 296], 95.00th=[ 300], 00:35:22.664 | 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305], 00:35:22.664 | 99.99th=[ 305] 00:35:22.664 bw ( KiB/s): min= 128, max= 384, per=3.86%, avg=243.20, stdev=57.24, samples=20 00:35:22.664 iops : min= 32, max= 96, avg=60.80, 
stdev=14.31, samples=20 00:35:22.664 lat (msec) : 250=33.33%, 500=66.67% 00:35:22.664 cpu : usr=98.35%, sys=1.27%, ctx=18, majf=0, minf=21 00:35:22.664 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:22.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.664 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.664 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.664 filename2: (groupid=0, jobs=1): err= 0: pid=1116934: Sat Jul 20 18:10:55 2024 00:35:22.664 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10175msec) 00:35:22.664 slat (usec): min=5, max=101, avg=32.82, stdev=11.06 00:35:22.664 clat (msec): min=192, max=332, avg=260.50, stdev=28.03 00:35:22.664 lat (msec): min=192, max=332, avg=260.53, stdev=28.03 00:35:22.664 clat percentiles (msec): 00:35:22.664 | 1.00th=[ 201], 5.00th=[ 207], 10.00th=[ 224], 20.00th=[ 234], 00:35:22.664 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 271], 00:35:22.664 | 70.00th=[ 279], 80.00th=[ 284], 90.00th=[ 300], 95.00th=[ 305], 00:35:22.664 | 99.00th=[ 317], 99.50th=[ 334], 99.90th=[ 334], 99.95th=[ 334], 00:35:22.664 | 99.99th=[ 334] 00:35:22.664 bw ( KiB/s): min= 128, max= 256, per=3.86%, avg=243.20, stdev=36.93, samples=20 00:35:22.664 iops : min= 32, max= 64, avg=60.80, stdev= 9.23, samples=20 00:35:22.664 lat (msec) : 250=28.53%, 500=71.47% 00:35:22.664 cpu : usr=97.81%, sys=1.50%, ctx=160, majf=0, minf=21 00:35:22.664 IO depths : 1=3.8%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:35:22.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.664 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.664 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.664 filename2: (groupid=0, jobs=1): err= 0: pid=1116935: Sat Jul 20 18:10:55 2024 00:35:22.664 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10177msec) 00:35:22.664 slat (usec): min=7, max=276, avg=54.66, stdev=43.88 00:35:22.664 clat (msec): min=121, max=455, avg=260.53, stdev=45.05 00:35:22.664 lat (msec): min=121, max=455, avg=260.59, stdev=45.05 00:35:22.664 clat percentiles (msec): 00:35:22.664 | 1.00th=[ 140], 5.00th=[ 186], 10.00th=[ 209], 20.00th=[ 239], 00:35:22.664 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 264], 60.00th=[ 271], 00:35:22.664 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 296], 95.00th=[ 321], 00:35:22.664 | 99.00th=[ 418], 99.50th=[ 435], 99.90th=[ 456], 99.95th=[ 456], 00:35:22.664 | 99.99th=[ 456] 00:35:22.664 bw ( KiB/s): min= 128, max= 384, per=3.86%, avg=243.20, stdev=55.57, samples=20 00:35:22.664 iops : min= 32, max= 96, avg=60.80, stdev=13.89, samples=20 00:35:22.664 lat (msec) : 250=33.01%, 500=66.99% 00:35:22.664 cpu : usr=96.47%, sys=2.02%, ctx=43, majf=0, minf=22 00:35:22.664 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:35:22.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.664 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.664 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.665 filename2: (groupid=0, jobs=1): err= 0: pid=1116936: Sat Jul 20 18:10:55 2024 00:35:22.665 read: IOPS=64, BW=257KiB/s (263kB/s)(2624KiB/10206msec) 
00:35:22.665 slat (usec): min=11, max=240, avg=34.04, stdev=34.20 00:35:22.665 clat (msec): min=68, max=345, avg=248.49, stdev=50.99 00:35:22.665 lat (msec): min=68, max=345, avg=248.53, stdev=50.99 00:35:22.665 clat percentiles (msec): 00:35:22.665 | 1.00th=[ 69], 5.00th=[ 165], 10.00th=[ 207], 20.00th=[ 226], 00:35:22.665 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 266], 00:35:22.665 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 300], 95.00th=[ 305], 00:35:22.665 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 347], 99.95th=[ 347], 00:35:22.665 | 99.99th=[ 347] 00:35:22.665 bw ( KiB/s): min= 128, max= 384, per=4.07%, avg=256.00, stdev=57.10, samples=20 00:35:22.665 iops : min= 32, max= 96, avg=64.00, stdev=14.28, samples=20 00:35:22.665 lat (msec) : 100=4.88%, 250=29.57%, 500=65.55% 00:35:22.665 cpu : usr=95.65%, sys=2.41%, ctx=223, majf=0, minf=22 00:35:22.665 IO depths : 1=3.8%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:35:22.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.665 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.665 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.665 filename2: (groupid=0, jobs=1): err= 0: pid=1116937: Sat Jul 20 18:10:55 2024 00:35:22.665 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10170msec) 00:35:22.665 slat (nsec): min=8277, max=99724, avg=36513.17, stdev=22589.39 00:35:22.665 clat (msec): min=131, max=415, avg=260.46, stdev=41.62 00:35:22.665 lat (msec): min=131, max=415, avg=260.50, stdev=41.62 00:35:22.665 clat percentiles (msec): 00:35:22.665 | 1.00th=[ 133], 5.00th=[ 190], 10.00th=[ 218], 20.00th=[ 239], 00:35:22.665 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 264], 60.00th=[ 271], 00:35:22.665 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 296], 95.00th=[ 313], 00:35:22.665 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 418], 99.95th=[ 418], 00:35:22.665 | 99.99th=[ 418] 00:35:22.665 bw ( KiB/s): min= 128, max= 384, per=3.86%, avg=243.20, stdev=55.81, samples=20 00:35:22.665 iops : min= 32, max= 96, avg=60.80, stdev=13.95, samples=20 00:35:22.665 lat (msec) : 250=34.62%, 500=65.38% 00:35:22.665 cpu : usr=98.23%, sys=1.34%, ctx=14, majf=0, minf=26 00:35:22.665 IO depths : 1=1.9%, 2=7.9%, 4=24.0%, 8=55.6%, 16=10.6%, 32=0.0%, >=64=0.0% 00:35:22.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.665 complete : 0=0.0%, 4=94.1%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.665 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.665 filename2: (groupid=0, jobs=1): err= 0: pid=1116938: Sat Jul 20 18:10:55 2024 00:35:22.665 read: IOPS=85, BW=341KiB/s (349kB/s)(3456KiB/10146msec) 00:35:22.665 slat (usec): min=7, max=217, avg=21.06, stdev=22.70 00:35:22.665 clat (msec): min=61, max=300, avg=185.60, stdev=45.67 00:35:22.665 lat (msec): min=61, max=300, avg=185.62, stdev=45.68 00:35:22.665 clat percentiles (msec): 00:35:22.665 | 1.00th=[ 68], 5.00th=[ 100], 10.00th=[ 127], 20.00th=[ 155], 00:35:22.665 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 188], 60.00th=[ 207], 00:35:22.665 | 70.00th=[ 222], 80.00th=[ 228], 90.00th=[ 243], 95.00th=[ 249], 00:35:22.665 | 99.00th=[ 266], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:35:22.665 | 99.99th=[ 300] 00:35:22.665 bw ( KiB/s): min= 256, max= 384, per=5.38%, avg=339.20, stdev=56.29, samples=20 
00:35:22.665 iops : min= 64, max= 96, avg=84.80, stdev=14.07, samples=20 00:35:22.665 lat (msec) : 100=5.32%, 250=91.20%, 500=3.47% 00:35:22.665 cpu : usr=97.83%, sys=1.44%, ctx=35, majf=0, minf=33 00:35:22.665 IO depths : 1=1.3%, 2=6.7%, 4=22.5%, 8=58.3%, 16=11.2%, 32=0.0%, >=64=0.0% 00:35:22.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.665 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.665 issued rwts: total=864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.665 filename2: (groupid=0, jobs=1): err= 0: pid=1116939: Sat Jul 20 18:10:55 2024 00:35:22.665 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10181msec) 00:35:22.665 slat (nsec): min=7872, max=43426, avg=15444.62, stdev=7763.92 00:35:22.665 clat (msec): min=98, max=410, avg=259.69, stdev=47.35 00:35:22.665 lat (msec): min=98, max=410, avg=259.71, stdev=47.35 00:35:22.665 clat percentiles (msec): 00:35:22.665 | 1.00th=[ 128], 5.00th=[ 192], 10.00th=[ 205], 20.00th=[ 230], 00:35:22.665 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 266], 60.00th=[ 271], 00:35:22.665 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 313], 95.00th=[ 342], 00:35:22.665 | 99.00th=[ 397], 99.50th=[ 401], 99.90th=[ 409], 99.95th=[ 409], 00:35:22.665 | 99.99th=[ 409] 00:35:22.665 bw ( KiB/s): min= 128, max= 368, per=3.86%, avg=243.20, stdev=51.81, samples=20 00:35:22.665 iops : min= 32, max= 92, avg=60.80, stdev=12.95, samples=20 00:35:22.665 lat (msec) : 100=0.32%, 250=30.77%, 500=68.91% 00:35:22.665 cpu : usr=98.31%, sys=1.32%, ctx=15, majf=0, minf=31 00:35:22.665 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:35:22.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.665 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.665 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.665 filename2: (groupid=0, jobs=1): err= 0: pid=1116940: Sat Jul 20 18:10:55 2024 00:35:22.665 read: IOPS=62, BW=251KiB/s (257kB/s)(2560KiB/10181msec) 00:35:22.665 slat (usec): min=8, max=384, avg=32.96, stdev=34.40 00:35:22.665 clat (msec): min=139, max=303, avg=254.24, stdev=34.71 00:35:22.665 lat (msec): min=139, max=303, avg=254.27, stdev=34.71 00:35:22.665 clat percentiles (msec): 00:35:22.665 | 1.00th=[ 140], 5.00th=[ 174], 10.00th=[ 209], 20.00th=[ 226], 00:35:22.665 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 266], 00:35:22.665 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 292], 95.00th=[ 300], 00:35:22.665 | 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305], 00:35:22.665 | 99.99th=[ 305] 00:35:22.665 bw ( KiB/s): min= 128, max= 384, per=3.95%, avg=249.60, stdev=50.44, samples=20 00:35:22.665 iops : min= 32, max= 96, avg=62.40, stdev=12.61, samples=20 00:35:22.665 lat (msec) : 250=29.69%, 500=70.31% 00:35:22.665 cpu : usr=96.46%, sys=2.17%, ctx=49, majf=0, minf=20 00:35:22.665 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:22.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.665 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.665 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:22.665 00:35:22.665 Run status group 0 (all jobs): 00:35:22.665 READ: 
bw=6296KiB/s (6447kB/s), 245KiB/s-353KiB/s (250kB/s-362kB/s), io=62.8MiB (65.8MB), run=10017-10206msec 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:22.665 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.666 bdev_null0 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.666 [2024-07-20 18:10:56.207481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:22.666 18:10:56 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.666 bdev_null1 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:22.666 { 00:35:22.666 "params": { 00:35:22.666 "name": "Nvme$subsystem", 00:35:22.666 "trtype": "$TEST_TRANSPORT", 00:35:22.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:22.666 "adrfam": "ipv4", 00:35:22.666 "trsvcid": "$NVMF_PORT", 00:35:22.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:22.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:22.666 "hdgst": ${hdgst:-false}, 00:35:22.666 "ddgst": ${ddgst:-false} 00:35:22.666 }, 00:35:22.666 "method": "bdev_nvme_attach_controller" 00:35:22.666 } 00:35:22.666 EOF 00:35:22.666 )") 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:22.666 { 00:35:22.666 "params": { 00:35:22.666 "name": "Nvme$subsystem", 00:35:22.666 "trtype": "$TEST_TRANSPORT", 00:35:22.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:22.666 "adrfam": "ipv4", 00:35:22.666 "trsvcid": "$NVMF_PORT", 00:35:22.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:22.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:22.666 "hdgst": ${hdgst:-false}, 00:35:22.666 "ddgst": ${ddgst:-false} 00:35:22.666 }, 00:35:22.666 "method": "bdev_nvme_attach_controller" 00:35:22.666 } 00:35:22.666 EOF 00:35:22.666 )") 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:22.666 "params": { 00:35:22.666 "name": "Nvme0", 00:35:22.666 "trtype": "tcp", 00:35:22.666 "traddr": "10.0.0.2", 00:35:22.666 "adrfam": "ipv4", 00:35:22.666 "trsvcid": "4420", 00:35:22.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:22.666 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:22.666 "hdgst": false, 00:35:22.666 "ddgst": false 00:35:22.666 }, 00:35:22.666 "method": "bdev_nvme_attach_controller" 00:35:22.666 },{ 00:35:22.666 "params": { 00:35:22.666 "name": "Nvme1", 00:35:22.666 "trtype": "tcp", 00:35:22.666 "traddr": "10.0.0.2", 00:35:22.666 "adrfam": "ipv4", 00:35:22.666 "trsvcid": "4420", 00:35:22.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:22.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:22.666 "hdgst": false, 00:35:22.666 "ddgst": false 00:35:22.666 }, 00:35:22.666 "method": "bdev_nvme_attach_controller" 00:35:22.666 }' 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:22.666 18:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.666 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:22.666 ... 00:35:22.666 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:22.666 ... 
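The fio_bdev invocation above feeds two descriptors: the generated bdev_nvme_attach_controller JSON on /dev/fd/62 and the fio job description on /dev/fd/61. A rough standalone equivalent of that job file, inferred from the bs/numjobs/iodepth/runtime values set in target/dif.sh and the filename0/filename1 header lines above (the section names, the Nvme0n1/Nvme1n1 bdev names, and all file paths are illustrative assumptions, not copied from the harness):

#!/usr/bin/env bash
# Job file matching bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5 from the trace.
cat > /tmp/dif_rand.fio <<'EOF'
[global]
ioengine=spdk_bdev    ; fio plugin built at spdk/build/fio/spdk_bdev
thread=1              ; required by the SPDK bdev ioengine
direct=1
rw=randread
bs=8k,16k,128k        ; read,write,trim sizes -> (R) 8192B, (W) 16.0KiB, (T) 128KiB headers
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1      ; bdev created from the "Nvme0" attach-controller entry (assumed name)

[filename1]
filename=Nvme1n1
EOF

# Run the same way the harness does, preloading the SPDK fio plugin and passing the
# attach-controller JSON (paths below are placeholders).
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
  fio --ioengine=spdk_bdev --spdk_json_conf /path/to/bdev.json /tmp/dif_rand.fio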
00:35:22.666 fio-3.35 00:35:22.666 Starting 4 threads 00:35:22.666 EAL: No free 2048 kB hugepages reported on node 1 00:35:27.926 00:35:27.926 filename0: (groupid=0, jobs=1): err= 0: pid=1118318: Sat Jul 20 18:11:02 2024 00:35:27.926 read: IOPS=1504, BW=11.8MiB/s (12.3MB/s)(58.8MiB/5003msec) 00:35:27.926 slat (nsec): min=3933, max=63665, avg=12807.60, stdev=5544.15 00:35:27.926 clat (usec): min=1892, max=12351, avg=5277.63, stdev=1275.95 00:35:27.926 lat (usec): min=1907, max=12359, avg=5290.43, stdev=1275.81 00:35:27.926 clat percentiles (usec): 00:35:27.926 | 1.00th=[ 3064], 5.00th=[ 3490], 10.00th=[ 3752], 20.00th=[ 4113], 00:35:27.926 | 30.00th=[ 4490], 40.00th=[ 4752], 50.00th=[ 5080], 60.00th=[ 5473], 00:35:27.926 | 70.00th=[ 5866], 80.00th=[ 6390], 90.00th=[ 6980], 95.00th=[ 7504], 00:35:27.926 | 99.00th=[ 8586], 99.50th=[ 8979], 99.90th=[10421], 99.95th=[10552], 00:35:27.926 | 99.99th=[12387] 00:35:27.926 bw ( KiB/s): min=11744, max=12723, per=19.84%, avg=12039.44, stdev=347.71, samples=9 00:35:27.926 iops : min= 1468, max= 1590, avg=1504.89, stdev=43.37, samples=9 00:35:27.926 lat (msec) : 2=0.01%, 4=15.96%, 10=83.88%, 20=0.15% 00:35:27.926 cpu : usr=87.23%, sys=8.84%, ctx=569, majf=0, minf=9 00:35:27.926 IO depths : 1=0.4%, 2=3.5%, 4=68.8%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:27.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.926 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.926 issued rwts: total=7525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.926 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:27.926 filename0: (groupid=0, jobs=1): err= 0: pid=1118319: Sat Jul 20 18:11:02 2024 00:35:27.926 read: IOPS=2151, BW=16.8MiB/s (17.6MB/s)(84.1MiB/5003msec) 00:35:27.926 slat (nsec): min=4261, max=55508, avg=13283.74, stdev=4737.00 00:35:27.926 clat (usec): min=1760, max=6917, avg=3681.72, stdev=605.21 00:35:27.926 lat (usec): min=1772, max=6928, avg=3695.00, stdev=605.22 00:35:27.926 clat percentiles (usec): 00:35:27.926 | 1.00th=[ 2409], 5.00th=[ 2737], 10.00th=[ 2933], 20.00th=[ 3163], 00:35:27.927 | 30.00th=[ 3359], 40.00th=[ 3523], 50.00th=[ 3654], 60.00th=[ 3818], 00:35:27.927 | 70.00th=[ 3982], 80.00th=[ 4146], 90.00th=[ 4424], 95.00th=[ 4686], 00:35:27.927 | 99.00th=[ 5342], 99.50th=[ 5538], 99.90th=[ 5932], 99.95th=[ 6390], 00:35:27.927 | 99.99th=[ 6915] 00:35:27.927 bw ( KiB/s): min=16720, max=17776, per=28.36%, avg=17209.60, stdev=323.82, samples=10 00:35:27.927 iops : min= 2090, max= 2222, avg=2151.20, stdev=40.48, samples=10 00:35:27.927 lat (msec) : 2=0.08%, 4=71.21%, 10=28.70% 00:35:27.927 cpu : usr=93.04%, sys=5.98%, ctx=13, majf=0, minf=2 00:35:27.927 IO depths : 1=0.3%, 2=2.0%, 4=65.6%, 8=32.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:27.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.927 complete : 0=0.0%, 4=95.8%, 8=4.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.927 issued rwts: total=10762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.927 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:27.927 filename1: (groupid=0, jobs=1): err= 0: pid=1118320: Sat Jul 20 18:11:02 2024 00:35:27.927 read: IOPS=1879, BW=14.7MiB/s (15.4MB/s)(73.4MiB/5001msec) 00:35:27.927 slat (nsec): min=4026, max=45886, avg=10812.01, stdev=3711.91 00:35:27.927 clat (usec): min=2237, max=47025, avg=4226.97, stdev=1428.85 00:35:27.927 lat (usec): min=2245, max=47036, avg=4237.78, stdev=1428.73 00:35:27.927 clat percentiles (usec): 00:35:27.927 | 1.00th=[ 
2737], 5.00th=[ 3163], 10.00th=[ 3359], 20.00th=[ 3654], 00:35:27.927 | 30.00th=[ 3818], 40.00th=[ 3982], 50.00th=[ 4146], 60.00th=[ 4293], 00:35:27.927 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 5080], 95.00th=[ 5473], 00:35:27.927 | 99.00th=[ 6194], 99.50th=[ 6521], 99.90th=[ 7963], 99.95th=[46924], 00:35:27.927 | 99.99th=[46924] 00:35:27.927 bw ( KiB/s): min=14076, max=15360, per=24.77%, avg=15030.00, stdev=396.97, samples=10 00:35:27.927 iops : min= 1759, max= 1920, avg=1878.70, stdev=49.76, samples=10 00:35:27.927 lat (msec) : 4=41.66%, 10=58.25%, 50=0.09% 00:35:27.927 cpu : usr=93.98%, sys=5.20%, ctx=24, majf=0, minf=0 00:35:27.927 IO depths : 1=0.2%, 2=1.5%, 4=66.9%, 8=31.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:27.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.927 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.927 issued rwts: total=9397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.927 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:27.927 filename1: (groupid=0, jobs=1): err= 0: pid=1118321: Sat Jul 20 18:11:02 2024 00:35:27.927 read: IOPS=2050, BW=16.0MiB/s (16.8MB/s)(80.2MiB/5003msec) 00:35:27.927 slat (nsec): min=3790, max=50638, avg=11226.90, stdev=4829.35 00:35:27.927 clat (usec): min=2058, max=6941, avg=3867.99, stdev=625.71 00:35:27.927 lat (usec): min=2072, max=6955, avg=3879.22, stdev=625.86 00:35:27.927 clat percentiles (usec): 00:35:27.927 | 1.00th=[ 2573], 5.00th=[ 2900], 10.00th=[ 3130], 20.00th=[ 3359], 00:35:27.927 | 30.00th=[ 3523], 40.00th=[ 3687], 50.00th=[ 3851], 60.00th=[ 3982], 00:35:27.927 | 70.00th=[ 4113], 80.00th=[ 4359], 90.00th=[ 4686], 95.00th=[ 4948], 00:35:27.927 | 99.00th=[ 5604], 99.50th=[ 5997], 99.90th=[ 6390], 99.95th=[ 6390], 00:35:27.927 | 99.99th=[ 6915] 00:35:27.927 bw ( KiB/s): min=15920, max=16862, per=27.12%, avg=16454.89, stdev=376.84, samples=9 00:35:27.927 iops : min= 1990, max= 2107, avg=2056.78, stdev=47.00, samples=9 00:35:27.927 lat (msec) : 4=61.38%, 10=38.62% 00:35:27.927 cpu : usr=93.96%, sys=5.26%, ctx=8, majf=0, minf=0 00:35:27.927 IO depths : 1=0.2%, 2=2.1%, 4=66.7%, 8=31.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:27.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.927 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.927 issued rwts: total=10260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.927 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:27.927 00:35:27.927 Run status group 0 (all jobs): 00:35:27.927 READ: bw=59.3MiB/s (62.1MB/s), 11.8MiB/s-16.8MiB/s (12.3MB/s-17.6MB/s), io=296MiB (311MB), run=5001-5003msec 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.927 00:35:27.927 real 0m24.369s 00:35:27.927 user 4m33.534s 00:35:27.927 sys 0m7.868s 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:27.927 18:11:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.927 ************************************ 00:35:27.927 END TEST fio_dif_rand_params 00:35:27.927 ************************************ 00:35:27.927 18:11:02 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:27.927 18:11:02 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:27.927 18:11:02 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:27.927 18:11:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:27.927 ************************************ 00:35:27.927 START TEST fio_dif_digest 00:35:27.927 ************************************ 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:27.927 bdev_null0 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:27.927 [2024-07-20 18:11:02.519418] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:27.927 18:11:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:27.927 { 00:35:27.927 "params": { 00:35:27.927 "name": "Nvme$subsystem", 00:35:27.927 "trtype": "$TEST_TRANSPORT", 00:35:27.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:27.927 "adrfam": "ipv4", 00:35:27.927 "trsvcid": "$NVMF_PORT", 00:35:27.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:27.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:27.927 "hdgst": ${hdgst:-false}, 00:35:27.927 "ddgst": ${ddgst:-false} 00:35:27.927 }, 00:35:27.927 "method": "bdev_nvme_attach_controller" 00:35:27.927 } 00:35:27.927 EOF 00:35:27.927 )") 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 
-- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
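rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, so the target-side setup of this digest run (a null bdev with 16-byte metadata and DIF type 3, exported as cnode0 on the 10.0.0.2:4420 TCP listener) can be replayed by hand roughly as follows, assuming the TCP transport was already created earlier in the test:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk      # tree used in this log
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420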
00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:27.928 "params": { 00:35:27.928 "name": "Nvme0", 00:35:27.928 "trtype": "tcp", 00:35:27.928 "traddr": "10.0.0.2", 00:35:27.928 "adrfam": "ipv4", 00:35:27.928 "trsvcid": "4420", 00:35:27.928 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:27.928 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:27.928 "hdgst": true, 00:35:27.928 "ddgst": true 00:35:27.928 }, 00:35:27.928 "method": "bdev_nvme_attach_controller" 00:35:27.928 }' 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:27.928 18:11:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:28.186 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:28.186 ... 
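On the workload side, the banner above shows what gen_fio_conf asked for in this digest pass: random 128 KiB reads at iodepth 3 from the single attached bdev, three jobs for ten seconds, with TCP header and data digests switched on through the "hdgst"/"ddgst" attach params. A hand-written job file along the same lines would look roughly like this; it is a sketch, the exact file the helper emits may differ, and Nvme0n1 is assumed to be the namespace bdev created by the attach call:

cat > digest.job <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=10
time_based=1
[filename0]
filename=Nvme0n1
EOF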
00:35:28.186 fio-3.35 00:35:28.186 Starting 3 threads 00:35:28.186 EAL: No free 2048 kB hugepages reported on node 1 00:35:40.375 00:35:40.375 filename0: (groupid=0, jobs=1): err= 0: pid=1119076: Sat Jul 20 18:11:13 2024 00:35:40.375 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(275MiB/10048msec) 00:35:40.375 slat (nsec): min=4476, max=45303, avg=19103.04, stdev=5116.84 00:35:40.375 clat (usec): min=7741, max=95547, avg=13683.76, stdev=6474.62 00:35:40.375 lat (usec): min=7754, max=95562, avg=13702.86, stdev=6475.12 00:35:40.375 clat percentiles (usec): 00:35:40.375 | 1.00th=[ 8291], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[10945], 00:35:40.375 | 30.00th=[11863], 40.00th=[12780], 50.00th=[13304], 60.00th=[13698], 00:35:40.375 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15008], 95.00th=[15401], 00:35:40.375 | 99.00th=[54264], 99.50th=[54789], 99.90th=[56886], 99.95th=[93848], 00:35:40.375 | 99.99th=[95945] 00:35:40.375 bw ( KiB/s): min=22784, max=31744, per=38.75%, avg=28070.40, stdev=2516.89, samples=20 00:35:40.375 iops : min= 178, max= 248, avg=219.30, stdev=19.66, samples=20 00:35:40.375 lat (msec) : 10=8.33%, 20=89.48%, 50=0.18%, 100=2.00% 00:35:40.375 cpu : usr=84.73%, sys=11.36%, ctx=644, majf=0, minf=144 00:35:40.375 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:40.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.375 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.375 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:40.375 filename0: (groupid=0, jobs=1): err= 0: pid=1119077: Sat Jul 20 18:11:13 2024 00:35:40.375 read: IOPS=191, BW=24.0MiB/s (25.1MB/s)(241MiB/10048msec) 00:35:40.375 slat (nsec): min=4831, max=39856, avg=14489.94, stdev=1990.05 00:35:40.375 clat (usec): min=7966, max=96304, avg=15595.59, stdev=9913.03 00:35:40.375 lat (usec): min=7991, max=96318, avg=15610.08, stdev=9912.97 00:35:40.375 clat percentiles (usec): 00:35:40.375 | 1.00th=[ 8455], 5.00th=[ 9503], 10.00th=[10683], 20.00th=[11863], 00:35:40.375 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13829], 60.00th=[14222], 00:35:40.375 | 70.00th=[14484], 80.00th=[15008], 90.00th=[15664], 95.00th=[52167], 00:35:40.375 | 99.00th=[55837], 99.50th=[56361], 99.90th=[95945], 99.95th=[95945], 00:35:40.375 | 99.99th=[95945] 00:35:40.375 bw ( KiB/s): min=18176, max=29952, per=34.01%, avg=24642.40, stdev=2876.56, samples=20 00:35:40.375 iops : min= 142, max= 234, avg=192.50, stdev=22.48, samples=20 00:35:40.375 lat (msec) : 10=6.17%, 20=88.33%, 50=0.21%, 100=5.29% 00:35:40.375 cpu : usr=91.31%, sys=7.55%, ctx=297, majf=0, minf=183 00:35:40.375 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:40.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.375 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.375 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:40.375 filename0: (groupid=0, jobs=1): err= 0: pid=1119078: Sat Jul 20 18:11:13 2024 00:35:40.375 read: IOPS=155, BW=19.5MiB/s (20.4MB/s)(195MiB/10040msec) 00:35:40.375 slat (nsec): min=4893, max=33520, avg=14446.29, stdev=1548.45 00:35:40.375 clat (usec): min=9746, max=97359, avg=19248.73, stdev=11714.93 00:35:40.375 lat (usec): min=9760, max=97373, avg=19263.17, stdev=11714.96 00:35:40.375 clat percentiles (usec): 
00:35:40.375 | 1.00th=[10421], 5.00th=[12518], 10.00th=[13304], 20.00th=[14615], 00:35:40.375 | 30.00th=[15270], 40.00th=[15795], 50.00th=[16188], 60.00th=[16581], 00:35:40.375 | 70.00th=[16909], 80.00th=[17695], 90.00th=[19006], 95.00th=[56361], 00:35:40.375 | 99.00th=[58459], 99.50th=[58983], 99.90th=[95945], 99.95th=[96994], 00:35:40.375 | 99.99th=[96994] 00:35:40.375 bw ( KiB/s): min=16640, max=22784, per=27.56%, avg=19968.00, stdev=1986.44, samples=20 00:35:40.375 iops : min= 130, max= 178, avg=156.00, stdev=15.52, samples=20 00:35:40.375 lat (msec) : 10=0.32%, 20=90.79%, 50=0.58%, 100=8.32% 00:35:40.375 cpu : usr=92.47%, sys=6.93%, ctx=35, majf=0, minf=68 00:35:40.375 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:40.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.375 issued rwts: total=1563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.375 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:40.375 00:35:40.375 Run status group 0 (all jobs): 00:35:40.375 READ: bw=70.7MiB/s (74.2MB/s), 19.5MiB/s-27.3MiB/s (20.4MB/s-28.6MB/s), io=711MiB (745MB), run=10040-10048msec 00:35:40.375 18:11:13 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:40.375 18:11:13 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:40.375 18:11:13 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:40.375 18:11:13 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:40.375 18:11:13 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:40.375 18:11:13 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:40.375 18:11:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.375 18:11:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:40.375 18:11:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.375 18:11:13 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:40.375 18:11:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.375 18:11:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:40.375 18:11:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.375 00:35:40.375 real 0m11.068s 00:35:40.375 user 0m27.929s 00:35:40.375 sys 0m2.884s 00:35:40.375 18:11:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:40.375 18:11:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:40.375 ************************************ 00:35:40.375 END TEST fio_dif_digest 00:35:40.375 ************************************ 00:35:40.375 18:11:13 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:40.375 18:11:13 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:40.375 18:11:13 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:40.375 18:11:13 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:35:40.375 18:11:13 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:40.375 18:11:13 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:35:40.375 18:11:13 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:40.376 18:11:13 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:40.376 rmmod nvme_tcp 00:35:40.376 rmmod 
nvme_fabrics 00:35:40.376 rmmod nvme_keyring 00:35:40.376 18:11:13 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:40.376 18:11:13 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:35:40.376 18:11:13 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:35:40.376 18:11:13 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1113024 ']' 00:35:40.376 18:11:13 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1113024 00:35:40.376 18:11:13 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 1113024 ']' 00:35:40.376 18:11:13 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 1113024 00:35:40.376 18:11:13 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:35:40.376 18:11:13 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:40.376 18:11:13 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1113024 00:35:40.376 18:11:13 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:40.376 18:11:13 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:40.376 18:11:13 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1113024' 00:35:40.376 killing process with pid 1113024 00:35:40.376 18:11:13 nvmf_dif -- common/autotest_common.sh@965 -- # kill 1113024 00:35:40.376 18:11:13 nvmf_dif -- common/autotest_common.sh@970 -- # wait 1113024 00:35:40.376 18:11:13 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:40.376 18:11:13 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:40.376 Waiting for block devices as requested 00:35:40.376 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:40.376 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:40.376 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:40.634 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:40.634 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:40.634 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:40.634 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:40.634 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:40.892 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:40.892 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:40.892 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:40.892 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:41.150 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:41.150 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:41.150 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:41.409 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:41.409 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:41.409 18:11:16 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:41.409 18:11:16 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:41.409 18:11:16 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:41.409 18:11:16 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:41.409 18:11:16 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:41.409 18:11:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:41.409 18:11:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.937 18:11:18 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:43.937 00:35:43.937 real 1m6.009s 00:35:43.937 user 6m28.350s 00:35:43.937 sys 0m19.758s 00:35:43.937 18:11:18 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:43.937 18:11:18 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:35:43.937 ************************************ 00:35:43.937 END TEST nvmf_dif 00:35:43.937 ************************************ 00:35:43.937 18:11:18 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:43.937 18:11:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:43.937 18:11:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:43.937 18:11:18 -- common/autotest_common.sh@10 -- # set +x 00:35:43.937 ************************************ 00:35:43.937 START TEST nvmf_abort_qd_sizes 00:35:43.937 ************************************ 00:35:43.937 18:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:43.937 * Looking for test storage... 00:35:43.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:43.937 18:11:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:43.937 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:43.937 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:43.937 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:43.937 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.938 18:11:18 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:35:43.938 18:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:45.314 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:45.314 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:45.314 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:45.572 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:45.572 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:45.573 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:45.573 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
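The device scan above works purely from PCI IDs: it collects functions matching the Intel E810 ID 0x8086:0x159b and then reads the attached kernel net device names out of sysfs. Done by hand with the addresses found in this run, that is roughly:

lspci -d 8086:159b                          # list functions with the 0x159b (E810) device ID
ls /sys/bus/pci/devices/0000:0a:00.0/net/   # -> cvl_0_0, the netdev on the first port
ls /sys/bus/pci/devices/0000:0a:00.1/net/   # -> cvl_0_1, the netdev on the second port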
00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:45.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:45.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:35:45.573 00:35:45.573 --- 10.0.0.2 ping statistics --- 00:35:45.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.573 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:45.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:45.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:35:45.573 00:35:45.573 --- 10.0.0.1 ping statistics --- 00:35:45.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.573 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:45.573 18:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:46.949 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:46.949 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:46.949 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:46.949 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:46.949 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:46.949 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:46.949 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:46.949 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:46.949 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:46.949 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:46.949 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:46.949 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:46.949 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:46.949 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:46.949 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:46.949 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:47.884 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:47.884 18:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1123859 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1123859 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 1123859 ']' 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
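nvmf_tcp_init above splits those two ports across a network namespace so target and initiator traffic crosses a real link: cvl_0_0 (10.0.0.2) moves into cvl_0_0_ns_spdk for the target, while cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator. Condensed, and leaving out the preliminary address flushes, the commands issued are:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # root ns -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # namespaced target -> root ns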
00:35:47.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:47.885 18:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:47.885 [2024-07-20 18:11:22.612013] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:47.885 [2024-07-20 18:11:22.612107] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:47.885 EAL: No free 2048 kB hugepages reported on node 1 00:35:48.143 [2024-07-20 18:11:22.686846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:48.143 [2024-07-20 18:11:22.780643] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:48.143 [2024-07-20 18:11:22.780711] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:48.143 [2024-07-20 18:11:22.780727] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:48.143 [2024-07-20 18:11:22.780741] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:48.143 [2024-07-20 18:11:22.780752] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:48.143 [2024-07-20 18:11:22.782815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:48.143 [2024-07-20 18:11:22.782844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:48.143 [2024-07-20 18:11:22.782963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:48.143 [2024-07-20 18:11:22.782966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:48.143 18:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:48.400 ************************************ 00:35:48.400 START TEST spdk_target_abort 00:35:48.400 ************************************ 00:35:48.400 18:11:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:35:48.400 18:11:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:48.400 18:11:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:35:48.400 18:11:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.400 18:11:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.691 spdk_targetn1 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.691 [2024-07-20 18:11:25.782987] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.691 [2024-07-20 18:11:25.815306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:51.691 18:11:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:51.691 EAL: No free 2048 kB hugepages reported on node 1 
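spdk_target_abort above re-exports the local NVMe drive at 0000:88:00.0 over NVMe/TCP as nqn.2016-06.io.spdk:testnqn, then drives it with the abort example at queue depths 4, 24 and 64 (qds=(4 24 64)). The rabort loop amounts to:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk      # tree used in this log
for qd in 4 24 64; do
  ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done

In the per-run summaries that follow, the counters are internally consistent: aborts submitted plus aborts that failed to submit equals the completed I/O count, and success plus unsuccess equals the aborts submitted (for the qd=4 run, 1234 + 6171 = 7405 and 859 + 375 = 1234).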
00:35:54.213 Initializing NVMe Controllers 00:35:54.213 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:54.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:54.213 Initialization complete. Launching workers. 00:35:54.213 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 7405, failed: 0 00:35:54.213 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1234, failed to submit 6171 00:35:54.213 success 859, unsuccess 375, failed 0 00:35:54.213 18:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:54.213 18:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:54.470 EAL: No free 2048 kB hugepages reported on node 1 00:35:57.811 Initializing NVMe Controllers 00:35:57.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:57.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:57.811 Initialization complete. Launching workers. 00:35:57.811 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8720, failed: 0 00:35:57.811 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1273, failed to submit 7447 00:35:57.811 success 278, unsuccess 995, failed 0 00:35:57.811 18:11:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:57.811 18:11:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:57.811 EAL: No free 2048 kB hugepages reported on node 1 00:36:01.091 Initializing NVMe Controllers 00:36:01.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:01.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:01.091 Initialization complete. Launching workers. 
00:36:01.091 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31076, failed: 0 00:36:01.091 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2652, failed to submit 28424 00:36:01.091 success 516, unsuccess 2136, failed 0 00:36:01.091 18:11:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:01.091 18:11:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.091 18:11:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:01.091 18:11:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.091 18:11:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:01.091 18:11:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.091 18:11:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.462 18:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.462 18:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1123859 00:36:02.462 18:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 1123859 ']' 00:36:02.462 18:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 1123859 00:36:02.462 18:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:36:02.462 18:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:02.462 18:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1123859 00:36:02.462 18:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:02.462 18:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:02.462 18:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1123859' 00:36:02.462 killing process with pid 1123859 00:36:02.462 18:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 1123859 00:36:02.462 18:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 1123859 00:36:02.462 00:36:02.462 real 0m14.131s 00:36:02.462 user 0m52.862s 00:36:02.462 sys 0m2.867s 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.463 ************************************ 00:36:02.463 END TEST spdk_target_abort 00:36:02.463 ************************************ 00:36:02.463 18:11:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:02.463 18:11:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:02.463 18:11:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:02.463 18:11:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:02.463 ************************************ 00:36:02.463 START TEST kernel_target_abort 00:36:02.463 
************************************ 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:02.463 18:11:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:03.397 Waiting for block devices as requested 00:36:03.397 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:03.655 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:03.655 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:03.655 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:03.655 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:03.914 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:03.914 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:03.914 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:03.914 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:03.914 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:04.172 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:04.172 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:04.172 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:04.430 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:04.430 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:04.430 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:04.430 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:04.688 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:04.688 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:04.688 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:04.688 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:36:04.688 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:04.688 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:04.688 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:04.688 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:04.688 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:04.688 No valid GPT data, bailing 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:04.689 18:11:39 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:04.689 00:36:04.689 Discovery Log Number of Records 2, Generation counter 2 00:36:04.689 =====Discovery Log Entry 0====== 00:36:04.689 trtype: tcp 00:36:04.689 adrfam: ipv4 00:36:04.689 subtype: current discovery subsystem 00:36:04.689 treq: not specified, sq flow control disable supported 00:36:04.689 portid: 1 00:36:04.689 trsvcid: 4420 00:36:04.689 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:04.689 traddr: 10.0.0.1 00:36:04.689 eflags: none 00:36:04.689 sectype: none 00:36:04.689 =====Discovery Log Entry 1====== 00:36:04.689 trtype: tcp 00:36:04.689 adrfam: ipv4 00:36:04.689 subtype: nvme subsystem 00:36:04.689 treq: not specified, sq flow control disable supported 00:36:04.689 portid: 1 00:36:04.689 trsvcid: 4420 00:36:04.689 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:04.689 traddr: 10.0.0.1 00:36:04.689 eflags: none 00:36:04.689 sectype: none 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:04.689 18:11:39 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:04.689 18:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:04.689 EAL: No free 2048 kB hugepages reported on node 1 00:36:07.982 Initializing NVMe Controllers 00:36:07.982 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:07.982 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:07.982 Initialization complete. Launching workers. 00:36:07.982 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 22178, failed: 0 00:36:07.983 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22178, failed to submit 0 00:36:07.983 success 0, unsuccess 22178, failed 0 00:36:07.983 18:11:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:07.983 18:11:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:07.983 EAL: No free 2048 kB hugepages reported on node 1 00:36:11.259 Initializing NVMe Controllers 00:36:11.259 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:11.259 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:11.259 Initialization complete. Launching workers. 
00:36:11.259 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 49927, failed: 0 00:36:11.259 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 12562, failed to submit 37365 00:36:11.259 success 0, unsuccess 12562, failed 0 00:36:11.259 18:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:11.259 18:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:11.259 EAL: No free 2048 kB hugepages reported on node 1 00:36:13.788 Initializing NVMe Controllers 00:36:13.788 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:13.788 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:13.788 Initialization complete. Launching workers. 00:36:13.788 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 45336, failed: 0 00:36:13.788 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 11302, failed to submit 34034 00:36:13.788 success 0, unsuccess 11302, failed 0 00:36:13.788 18:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:13.788 18:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:13.788 18:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:13.788 18:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:13.788 18:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:13.788 18:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:13.788 18:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:13.788 18:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:13.788 18:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:14.048 18:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:14.984 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:14.984 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:14.984 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:14.984 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:14.984 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:14.984 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:14.984 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:14.984 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:14.984 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:14.984 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:14.984 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:14.984 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:14.984 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:14.984 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:14.984 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:14.984 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:15.921 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:16.180 00:36:16.180 real 0m13.660s 00:36:16.180 user 0m3.593s 00:36:16.180 sys 0m3.169s 00:36:16.180 18:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:16.180 18:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:16.180 ************************************ 00:36:16.180 END TEST kernel_target_abort 00:36:16.180 ************************************ 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:16.180 rmmod nvme_tcp 00:36:16.180 rmmod nvme_fabrics 00:36:16.180 rmmod nvme_keyring 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1123859 ']' 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1123859 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 1123859 ']' 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 1123859 00:36:16.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1123859) - No such process 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 1123859 is not found' 00:36:16.180 Process with pid 1123859 is not found 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:16.180 18:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:17.113 Waiting for block devices as requested 00:36:17.113 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:17.371 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:17.371 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:17.371 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:17.371 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:17.629 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:17.629 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:17.629 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:17.629 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:17.886 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:17.886 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:17.886 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:17.886 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:18.143 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:18.143 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:18.143 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:36:18.143 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:18.401 18:11:52 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:18.401 18:11:52 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:18.401 18:11:52 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:18.401 18:11:52 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:18.401 18:11:52 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:18.401 18:11:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:18.401 18:11:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.294 18:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:20.294 00:36:20.294 real 0m36.835s 00:36:20.294 user 0m58.486s 00:36:20.294 sys 0m9.159s 00:36:20.294 18:11:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:20.294 18:11:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:20.294 ************************************ 00:36:20.294 END TEST nvmf_abort_qd_sizes 00:36:20.294 ************************************ 00:36:20.294 18:11:55 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:20.294 18:11:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:20.294 18:11:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:20.294 18:11:55 -- common/autotest_common.sh@10 -- # set +x 00:36:20.294 ************************************ 00:36:20.294 START TEST keyring_file 00:36:20.294 ************************************ 00:36:20.295 18:11:55 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:20.295 * Looking for test storage... 
00:36:20.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:20.295 18:11:55 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:20.295 18:11:55 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:20.295 18:11:55 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:20.295 18:11:55 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:20.295 18:11:55 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:20.295 18:11:55 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:20.295 18:11:55 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:20.295 18:11:55 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:20.295 18:11:55 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:20.295 18:11:55 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:20.295 18:11:55 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:20.295 18:11:55 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:20.295 18:11:55 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:20.553 18:11:55 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:20.553 18:11:55 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:20.553 18:11:55 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:20.553 18:11:55 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.553 18:11:55 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.553 18:11:55 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.553 18:11:55 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:20.553 18:11:55 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:20.553 18:11:55 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:20.553 18:11:55 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:20.553 18:11:55 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:20.553 18:11:55 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:20.553 18:11:55 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:20.553 18:11:55 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jOtrSgKAn4 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:20.553 18:11:55 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jOtrSgKAn4 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jOtrSgKAn4 00:36:20.553 18:11:55 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.jOtrSgKAn4 00:36:20.553 18:11:55 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3o7Yy50aRN 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:20.553 18:11:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3o7Yy50aRN 00:36:20.553 18:11:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3o7Yy50aRN 00:36:20.553 18:11:55 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.3o7Yy50aRN 00:36:20.553 18:11:55 keyring_file -- keyring/file.sh@30 -- # tgtpid=1129367 00:36:20.553 18:11:55 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:20.553 18:11:55 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1129367 00:36:20.553 18:11:55 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1129367 ']' 00:36:20.553 18:11:55 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:20.553 18:11:55 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:20.553 18:11:55 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:20.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:20.553 18:11:55 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:20.553 18:11:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:20.553 [2024-07-20 18:11:55.228018] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
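Before bdevperf connects, the keyring test stages its PSKs on disk: prep_key writes an NVMe TLS interchange-formatted key to a mktemp file, restricts it to mode 0600, and keyring_file_add_key registers it over the bperf RPC socket. A minimal sketch under those assumptions; the exact interchange encoding comes from the python one-liner in nvmf/common.sh and how the output is redirected into the file is not visible in the trace:

    # Sketch of the traced prep_key / keyring_file_add_key flow; key bytes are the test constants from keyring/file.sh.
    key=00112233445566778899aabbccddeeff          # key0 in the trace above
    path=$(mktemp)                                 # e.g. /tmp/tmp.jOtrSgKAn4 in this log
    format_interchange_psk "$key" 0 > "$path"      # shell function from test/nvmf/common.sh (must be sourced); digest 0
    chmod 0600 "$path"                             # the keyring rejects group/other-readable key files
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"

The later chmod 0660 / "Invalid permissions for key file" exchange in this log is the negative test for exactly that permission check.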
00:36:20.553 [2024-07-20 18:11:55.228116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1129367 ] 00:36:20.553 EAL: No free 2048 kB hugepages reported on node 1 00:36:20.553 [2024-07-20 18:11:55.293329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.811 [2024-07-20 18:11:55.389085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:21.070 18:11:55 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:21.070 [2024-07-20 18:11:55.648521] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:21.070 null0 00:36:21.070 [2024-07-20 18:11:55.680570] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:21.070 [2024-07-20 18:11:55.681091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:21.070 [2024-07-20 18:11:55.688568] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.070 18:11:55 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:21.070 [2024-07-20 18:11:55.700614] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:21.070 request: 00:36:21.070 { 00:36:21.070 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:21.070 "secure_channel": false, 00:36:21.070 "listen_address": { 00:36:21.070 "trtype": "tcp", 00:36:21.070 "traddr": "127.0.0.1", 00:36:21.070 "trsvcid": "4420" 00:36:21.070 }, 00:36:21.070 "method": "nvmf_subsystem_add_listener", 00:36:21.070 "req_id": 1 00:36:21.070 } 00:36:21.070 Got JSON-RPC error response 00:36:21.070 response: 00:36:21.070 { 00:36:21.070 "code": -32602, 00:36:21.070 "message": "Invalid parameters" 00:36:21.070 } 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:21.070 18:11:55 
keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:21.070 18:11:55 keyring_file -- keyring/file.sh@46 -- # bperfpid=1129494 00:36:21.070 18:11:55 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1129494 /var/tmp/bperf.sock 00:36:21.070 18:11:55 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1129494 ']' 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:21.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:21.070 18:11:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:21.070 [2024-07-20 18:11:55.748881] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:36:21.070 [2024-07-20 18:11:55.748956] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1129494 ] 00:36:21.070 EAL: No free 2048 kB hugepages reported on node 1 00:36:21.070 [2024-07-20 18:11:55.808756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:21.328 [2024-07-20 18:11:55.901158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:21.328 18:11:56 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:21.328 18:11:56 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:21.328 18:11:56 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jOtrSgKAn4 00:36:21.328 18:11:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jOtrSgKAn4 00:36:21.587 18:11:56 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3o7Yy50aRN 00:36:21.587 18:11:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3o7Yy50aRN 00:36:21.844 18:11:56 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:21.844 18:11:56 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:21.844 18:11:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:21.844 18:11:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:21.844 18:11:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.102 18:11:56 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.jOtrSgKAn4 == \/\t\m\p\/\t\m\p\.\j\O\t\r\S\g\K\A\n\4 ]] 00:36:22.102 18:11:56 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:22.102 18:11:56 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:22.102 18:11:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.102 18:11:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.102 18:11:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:22.361 18:11:57 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.3o7Yy50aRN == \/\t\m\p\/\t\m\p\.\3\o\7\Y\y\5\0\a\R\N ]] 00:36:22.361 18:11:57 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:22.361 18:11:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:22.361 18:11:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.361 18:11:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.361 18:11:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:22.361 18:11:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.619 18:11:57 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:22.619 18:11:57 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:22.619 18:11:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:22.619 18:11:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.619 18:11:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.619 18:11:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.619 18:11:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:22.877 18:11:57 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:22.877 18:11:57 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:22.877 18:11:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:23.135 [2024-07-20 18:11:57.724548] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:23.135 nvme0n1 00:36:23.135 18:11:57 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:23.135 18:11:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:23.135 18:11:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:23.135 18:11:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:23.135 18:11:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:23.135 18:11:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:23.393 18:11:58 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:23.393 18:11:58 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:23.393 18:11:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:23.393 18:11:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:23.393 18:11:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:23.393 
18:11:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:23.393 18:11:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:23.651 18:11:58 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:23.651 18:11:58 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:23.910 Running I/O for 1 seconds... 00:36:24.842 00:36:24.842 Latency(us) 00:36:24.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.842 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:24.842 nvme0n1 : 1.03 3085.77 12.05 0.00 0.00 40976.60 4344.79 59030.95 00:36:24.842 =================================================================================================================== 00:36:24.842 Total : 3085.77 12.05 0.00 0.00 40976.60 4344.79 59030.95 00:36:24.842 0 00:36:24.842 18:11:59 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:24.842 18:11:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:25.099 18:11:59 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:25.099 18:11:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:25.100 18:11:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:25.100 18:11:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:25.100 18:11:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.100 18:11:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:25.358 18:12:00 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:25.358 18:12:00 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:25.358 18:12:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:25.358 18:12:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:25.358 18:12:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:25.358 18:12:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.358 18:12:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:25.616 18:12:00 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:25.616 18:12:00 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:25.616 18:12:00 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:25.616 18:12:00 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:25.616 18:12:00 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:25.616 18:12:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:25.616 18:12:00 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:25.616 18:12:00 
keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:25.616 18:12:00 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:25.616 18:12:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:25.875 [2024-07-20 18:12:00.534210] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:25.875 [2024-07-20 18:12:00.534631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73c310 (107): Transport endpoint is not connected 00:36:25.875 [2024-07-20 18:12:00.535617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73c310 (9): Bad file descriptor 00:36:25.875 [2024-07-20 18:12:00.536614] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:25.875 [2024-07-20 18:12:00.536637] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:25.875 [2024-07-20 18:12:00.536653] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:25.875 request: 00:36:25.875 { 00:36:25.875 "name": "nvme0", 00:36:25.875 "trtype": "tcp", 00:36:25.875 "traddr": "127.0.0.1", 00:36:25.875 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:25.875 "adrfam": "ipv4", 00:36:25.875 "trsvcid": "4420", 00:36:25.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:25.875 "psk": "key1", 00:36:25.875 "method": "bdev_nvme_attach_controller", 00:36:25.875 "req_id": 1 00:36:25.875 } 00:36:25.875 Got JSON-RPC error response 00:36:25.875 response: 00:36:25.875 { 00:36:25.875 "code": -5, 00:36:25.875 "message": "Input/output error" 00:36:25.875 } 00:36:25.875 18:12:00 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:25.875 18:12:00 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:25.875 18:12:00 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:25.875 18:12:00 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:25.875 18:12:00 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:25.875 18:12:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:25.875 18:12:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:25.875 18:12:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:25.876 18:12:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.876 18:12:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:26.141 18:12:00 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:26.141 18:12:00 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:26.141 18:12:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:26.142 18:12:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:26.142 18:12:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:26.142 18:12:00 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:26.142 18:12:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:26.401 18:12:01 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:26.401 18:12:01 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:26.401 18:12:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:26.657 18:12:01 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:26.657 18:12:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:26.914 18:12:01 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:26.914 18:12:01 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:26.914 18:12:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:27.170 18:12:01 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:27.170 18:12:01 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.jOtrSgKAn4 00:36:27.170 18:12:01 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.jOtrSgKAn4 00:36:27.170 18:12:01 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:27.170 18:12:01 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.jOtrSgKAn4 00:36:27.170 18:12:01 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:27.170 18:12:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:27.170 18:12:01 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:27.170 18:12:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:27.170 18:12:01 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jOtrSgKAn4 00:36:27.170 18:12:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jOtrSgKAn4 00:36:27.427 [2024-07-20 18:12:02.082515] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.jOtrSgKAn4': 0100660 00:36:27.427 [2024-07-20 18:12:02.082551] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:27.427 request: 00:36:27.427 { 00:36:27.427 "name": "key0", 00:36:27.427 "path": "/tmp/tmp.jOtrSgKAn4", 00:36:27.427 "method": "keyring_file_add_key", 00:36:27.427 "req_id": 1 00:36:27.427 } 00:36:27.427 Got JSON-RPC error response 00:36:27.427 response: 00:36:27.427 { 00:36:27.427 "code": -1, 00:36:27.427 "message": "Operation not permitted" 00:36:27.427 } 00:36:27.427 18:12:02 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:27.427 18:12:02 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:27.427 18:12:02 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:27.427 18:12:02 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:27.427 18:12:02 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.jOtrSgKAn4 00:36:27.427 18:12:02 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.jOtrSgKAn4 00:36:27.427 18:12:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jOtrSgKAn4 00:36:27.683 18:12:02 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.jOtrSgKAn4 00:36:27.683 18:12:02 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:27.683 18:12:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:27.683 18:12:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:27.683 18:12:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:27.683 18:12:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:27.683 18:12:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:27.940 18:12:02 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:27.940 18:12:02 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:27.940 18:12:02 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:27.940 18:12:02 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:27.940 18:12:02 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:27.940 18:12:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:27.940 18:12:02 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:27.940 18:12:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:27.940 18:12:02 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:27.940 18:12:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:28.233 [2024-07-20 18:12:02.844557] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.jOtrSgKAn4': No such file or directory 00:36:28.233 [2024-07-20 18:12:02.844590] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:28.233 [2024-07-20 18:12:02.844616] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:28.233 [2024-07-20 18:12:02.844626] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:28.233 [2024-07-20 18:12:02.844636] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:28.233 request: 00:36:28.233 { 00:36:28.233 "name": "nvme0", 00:36:28.233 "trtype": "tcp", 00:36:28.233 "traddr": "127.0.0.1", 00:36:28.233 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:28.233 "adrfam": "ipv4", 00:36:28.233 "trsvcid": "4420", 00:36:28.233 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:28.233 "psk": "key0", 00:36:28.233 "method": "bdev_nvme_attach_controller", 
00:36:28.233 "req_id": 1 00:36:28.233 } 00:36:28.233 Got JSON-RPC error response 00:36:28.233 response: 00:36:28.233 { 00:36:28.233 "code": -19, 00:36:28.233 "message": "No such device" 00:36:28.233 } 00:36:28.233 18:12:02 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:28.233 18:12:02 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:28.233 18:12:02 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:28.233 18:12:02 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:28.233 18:12:02 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:28.233 18:12:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:28.489 18:12:03 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:28.489 18:12:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:28.489 18:12:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:28.489 18:12:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:28.489 18:12:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:28.489 18:12:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:28.489 18:12:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.z23VSelbfe 00:36:28.489 18:12:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:28.489 18:12:03 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:28.489 18:12:03 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:28.489 18:12:03 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:28.489 18:12:03 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:28.489 18:12:03 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:28.489 18:12:03 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:28.489 18:12:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.z23VSelbfe 00:36:28.489 18:12:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.z23VSelbfe 00:36:28.489 18:12:03 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.z23VSelbfe 00:36:28.489 18:12:03 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.z23VSelbfe 00:36:28.489 18:12:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.z23VSelbfe 00:36:28.744 18:12:03 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:28.744 18:12:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:29.000 nvme0n1 00:36:29.000 18:12:03 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:29.000 18:12:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:29.000 18:12:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:29.000 18:12:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:29.000 18:12:03 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:29.000 18:12:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:29.256 18:12:04 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:29.256 18:12:04 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:29.256 18:12:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:29.513 18:12:04 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:29.513 18:12:04 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:29.513 18:12:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:29.513 18:12:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:29.513 18:12:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:29.770 18:12:04 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:29.770 18:12:04 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:29.770 18:12:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:29.770 18:12:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:29.770 18:12:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:29.770 18:12:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:29.770 18:12:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:30.027 18:12:04 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:30.027 18:12:04 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:30.027 18:12:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:30.284 18:12:05 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:30.284 18:12:05 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:30.284 18:12:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:30.542 18:12:05 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:30.542 18:12:05 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.z23VSelbfe 00:36:30.542 18:12:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.z23VSelbfe 00:36:30.799 18:12:05 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3o7Yy50aRN 00:36:30.799 18:12:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3o7Yy50aRN 00:36:31.056 18:12:05 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:31.056 18:12:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:31.313 nvme0n1 00:36:31.570 18:12:06 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:31.570 18:12:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:31.828 18:12:06 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:31.828 "subsystems": [ 00:36:31.828 { 00:36:31.828 "subsystem": "keyring", 00:36:31.828 "config": [ 00:36:31.828 { 00:36:31.828 "method": "keyring_file_add_key", 00:36:31.828 "params": { 00:36:31.828 "name": "key0", 00:36:31.828 "path": "/tmp/tmp.z23VSelbfe" 00:36:31.828 } 00:36:31.828 }, 00:36:31.828 { 00:36:31.828 "method": "keyring_file_add_key", 00:36:31.828 "params": { 00:36:31.828 "name": "key1", 00:36:31.828 "path": "/tmp/tmp.3o7Yy50aRN" 00:36:31.828 } 00:36:31.828 } 00:36:31.828 ] 00:36:31.828 }, 00:36:31.828 { 00:36:31.828 "subsystem": "iobuf", 00:36:31.829 "config": [ 00:36:31.829 { 00:36:31.829 "method": "iobuf_set_options", 00:36:31.829 "params": { 00:36:31.829 "small_pool_count": 8192, 00:36:31.829 "large_pool_count": 1024, 00:36:31.829 "small_bufsize": 8192, 00:36:31.829 "large_bufsize": 135168 00:36:31.829 } 00:36:31.829 } 00:36:31.829 ] 00:36:31.829 }, 00:36:31.829 { 00:36:31.829 "subsystem": "sock", 00:36:31.829 "config": [ 00:36:31.829 { 00:36:31.829 "method": "sock_set_default_impl", 00:36:31.829 "params": { 00:36:31.829 "impl_name": "posix" 00:36:31.829 } 00:36:31.829 }, 00:36:31.829 { 00:36:31.829 "method": "sock_impl_set_options", 00:36:31.829 "params": { 00:36:31.829 "impl_name": "ssl", 00:36:31.829 "recv_buf_size": 4096, 00:36:31.829 "send_buf_size": 4096, 00:36:31.829 "enable_recv_pipe": true, 00:36:31.829 "enable_quickack": false, 00:36:31.829 "enable_placement_id": 0, 00:36:31.829 "enable_zerocopy_send_server": true, 00:36:31.829 "enable_zerocopy_send_client": false, 00:36:31.829 "zerocopy_threshold": 0, 00:36:31.829 "tls_version": 0, 00:36:31.829 "enable_ktls": false 00:36:31.829 } 00:36:31.829 }, 00:36:31.829 { 00:36:31.829 "method": "sock_impl_set_options", 00:36:31.829 "params": { 00:36:31.829 "impl_name": "posix", 00:36:31.829 "recv_buf_size": 2097152, 00:36:31.829 "send_buf_size": 2097152, 00:36:31.829 "enable_recv_pipe": true, 00:36:31.829 "enable_quickack": false, 00:36:31.829 "enable_placement_id": 0, 00:36:31.829 "enable_zerocopy_send_server": true, 00:36:31.829 "enable_zerocopy_send_client": false, 00:36:31.829 "zerocopy_threshold": 0, 00:36:31.829 "tls_version": 0, 00:36:31.829 "enable_ktls": false 00:36:31.829 } 00:36:31.829 } 00:36:31.829 ] 00:36:31.829 }, 00:36:31.829 { 00:36:31.829 "subsystem": "vmd", 00:36:31.829 "config": [] 00:36:31.829 }, 00:36:31.829 { 00:36:31.829 "subsystem": "accel", 00:36:31.829 "config": [ 00:36:31.829 { 00:36:31.829 "method": "accel_set_options", 00:36:31.829 "params": { 00:36:31.829 "small_cache_size": 128, 00:36:31.829 "large_cache_size": 16, 00:36:31.829 "task_count": 2048, 00:36:31.829 "sequence_count": 2048, 00:36:31.829 "buf_count": 2048 00:36:31.829 } 00:36:31.829 } 00:36:31.829 ] 00:36:31.829 }, 00:36:31.829 { 00:36:31.829 "subsystem": "bdev", 00:36:31.829 "config": [ 00:36:31.829 { 00:36:31.829 "method": "bdev_set_options", 00:36:31.829 "params": { 00:36:31.829 "bdev_io_pool_size": 65535, 00:36:31.829 "bdev_io_cache_size": 256, 00:36:31.829 "bdev_auto_examine": true, 00:36:31.829 "iobuf_small_cache_size": 128, 
00:36:31.829 "iobuf_large_cache_size": 16 00:36:31.829 } 00:36:31.829 }, 00:36:31.829 { 00:36:31.829 "method": "bdev_raid_set_options", 00:36:31.829 "params": { 00:36:31.829 "process_window_size_kb": 1024 00:36:31.829 } 00:36:31.829 }, 00:36:31.829 { 00:36:31.829 "method": "bdev_iscsi_set_options", 00:36:31.829 "params": { 00:36:31.829 "timeout_sec": 30 00:36:31.829 } 00:36:31.829 }, 00:36:31.829 { 00:36:31.829 "method": "bdev_nvme_set_options", 00:36:31.829 "params": { 00:36:31.829 "action_on_timeout": "none", 00:36:31.829 "timeout_us": 0, 00:36:31.829 "timeout_admin_us": 0, 00:36:31.829 "keep_alive_timeout_ms": 10000, 00:36:31.829 "arbitration_burst": 0, 00:36:31.829 "low_priority_weight": 0, 00:36:31.829 "medium_priority_weight": 0, 00:36:31.829 "high_priority_weight": 0, 00:36:31.829 "nvme_adminq_poll_period_us": 10000, 00:36:31.829 "nvme_ioq_poll_period_us": 0, 00:36:31.829 "io_queue_requests": 512, 00:36:31.829 "delay_cmd_submit": true, 00:36:31.829 "transport_retry_count": 4, 00:36:31.829 "bdev_retry_count": 3, 00:36:31.829 "transport_ack_timeout": 0, 00:36:31.829 "ctrlr_loss_timeout_sec": 0, 00:36:31.829 "reconnect_delay_sec": 0, 00:36:31.829 "fast_io_fail_timeout_sec": 0, 00:36:31.829 "disable_auto_failback": false, 00:36:31.829 "generate_uuids": false, 00:36:31.829 "transport_tos": 0, 00:36:31.829 "nvme_error_stat": false, 00:36:31.829 "rdma_srq_size": 0, 00:36:31.829 "io_path_stat": false, 00:36:31.829 "allow_accel_sequence": false, 00:36:31.829 "rdma_max_cq_size": 0, 00:36:31.829 "rdma_cm_event_timeout_ms": 0, 00:36:31.829 "dhchap_digests": [ 00:36:31.829 "sha256", 00:36:31.829 "sha384", 00:36:31.829 "sha512" 00:36:31.829 ], 00:36:31.829 "dhchap_dhgroups": [ 00:36:31.829 "null", 00:36:31.829 "ffdhe2048", 00:36:31.829 "ffdhe3072", 00:36:31.829 "ffdhe4096", 00:36:31.829 "ffdhe6144", 00:36:31.829 "ffdhe8192" 00:36:31.829 ] 00:36:31.829 } 00:36:31.829 }, 00:36:31.829 { 00:36:31.829 "method": "bdev_nvme_attach_controller", 00:36:31.829 "params": { 00:36:31.829 "name": "nvme0", 00:36:31.829 "trtype": "TCP", 00:36:31.829 "adrfam": "IPv4", 00:36:31.829 "traddr": "127.0.0.1", 00:36:31.829 "trsvcid": "4420", 00:36:31.829 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:31.829 "prchk_reftag": false, 00:36:31.829 "prchk_guard": false, 00:36:31.829 "ctrlr_loss_timeout_sec": 0, 00:36:31.829 "reconnect_delay_sec": 0, 00:36:31.829 "fast_io_fail_timeout_sec": 0, 00:36:31.829 "psk": "key0", 00:36:31.829 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:31.829 "hdgst": false, 00:36:31.829 "ddgst": false 00:36:31.829 } 00:36:31.829 }, 00:36:31.829 { 00:36:31.829 "method": "bdev_nvme_set_hotplug", 00:36:31.829 "params": { 00:36:31.829 "period_us": 100000, 00:36:31.829 "enable": false 00:36:31.829 } 00:36:31.829 }, 00:36:31.829 { 00:36:31.829 "method": "bdev_wait_for_examine" 00:36:31.829 } 00:36:31.829 ] 00:36:31.829 }, 00:36:31.829 { 00:36:31.829 "subsystem": "nbd", 00:36:31.829 "config": [] 00:36:31.829 } 00:36:31.829 ] 00:36:31.829 }' 00:36:31.829 18:12:06 keyring_file -- keyring/file.sh@114 -- # killprocess 1129494 00:36:31.829 18:12:06 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1129494 ']' 00:36:31.829 18:12:06 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1129494 00:36:31.829 18:12:06 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:31.829 18:12:06 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:31.829 18:12:06 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1129494 00:36:31.829 18:12:06 keyring_file 
-- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:31.829 18:12:06 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:31.829 18:12:06 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1129494' 00:36:31.829 killing process with pid 1129494 00:36:31.829 18:12:06 keyring_file -- common/autotest_common.sh@965 -- # kill 1129494 00:36:31.829 Received shutdown signal, test time was about 1.000000 seconds 00:36:31.829 00:36:31.829 Latency(us) 00:36:31.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:31.829 =================================================================================================================== 00:36:31.829 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:31.829 18:12:06 keyring_file -- common/autotest_common.sh@970 -- # wait 1129494 00:36:32.087 18:12:06 keyring_file -- keyring/file.sh@117 -- # bperfpid=1131041 00:36:32.087 18:12:06 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1131041 /var/tmp/bperf.sock 00:36:32.087 18:12:06 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1131041 ']' 00:36:32.087 18:12:06 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:32.088 18:12:06 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:32.088 18:12:06 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:32.088 18:12:06 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:32.088 18:12:06 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:32.088 "subsystems": [ 00:36:32.088 { 00:36:32.088 "subsystem": "keyring", 00:36:32.088 "config": [ 00:36:32.088 { 00:36:32.088 "method": "keyring_file_add_key", 00:36:32.088 "params": { 00:36:32.088 "name": "key0", 00:36:32.088 "path": "/tmp/tmp.z23VSelbfe" 00:36:32.088 } 00:36:32.088 }, 00:36:32.088 { 00:36:32.088 "method": "keyring_file_add_key", 00:36:32.088 "params": { 00:36:32.088 "name": "key1", 00:36:32.088 "path": "/tmp/tmp.3o7Yy50aRN" 00:36:32.088 } 00:36:32.088 } 00:36:32.088 ] 00:36:32.088 }, 00:36:32.088 { 00:36:32.088 "subsystem": "iobuf", 00:36:32.088 "config": [ 00:36:32.088 { 00:36:32.088 "method": "iobuf_set_options", 00:36:32.088 "params": { 00:36:32.088 "small_pool_count": 8192, 00:36:32.088 "large_pool_count": 1024, 00:36:32.088 "small_bufsize": 8192, 00:36:32.088 "large_bufsize": 135168 00:36:32.088 } 00:36:32.088 } 00:36:32.088 ] 00:36:32.088 }, 00:36:32.088 { 00:36:32.088 "subsystem": "sock", 00:36:32.088 "config": [ 00:36:32.088 { 00:36:32.088 "method": "sock_set_default_impl", 00:36:32.088 "params": { 00:36:32.088 "impl_name": "posix" 00:36:32.088 } 00:36:32.088 }, 00:36:32.088 { 00:36:32.088 "method": "sock_impl_set_options", 00:36:32.088 "params": { 00:36:32.088 "impl_name": "ssl", 00:36:32.088 "recv_buf_size": 4096, 00:36:32.088 "send_buf_size": 4096, 00:36:32.088 "enable_recv_pipe": true, 00:36:32.088 "enable_quickack": false, 00:36:32.088 "enable_placement_id": 0, 00:36:32.088 "enable_zerocopy_send_server": true, 00:36:32.088 "enable_zerocopy_send_client": false, 00:36:32.088 "zerocopy_threshold": 0, 00:36:32.088 "tls_version": 0, 00:36:32.088 "enable_ktls": false 00:36:32.088 } 00:36:32.088 }, 00:36:32.088 { 00:36:32.088 "method": "sock_impl_set_options", 00:36:32.088 "params": { 
00:36:32.088 "impl_name": "posix", 00:36:32.088 "recv_buf_size": 2097152, 00:36:32.088 "send_buf_size": 2097152, 00:36:32.088 "enable_recv_pipe": true, 00:36:32.088 "enable_quickack": false, 00:36:32.088 "enable_placement_id": 0, 00:36:32.088 "enable_zerocopy_send_server": true, 00:36:32.088 "enable_zerocopy_send_client": false, 00:36:32.088 "zerocopy_threshold": 0, 00:36:32.088 "tls_version": 0, 00:36:32.088 "enable_ktls": false 00:36:32.088 } 00:36:32.088 } 00:36:32.088 ] 00:36:32.088 }, 00:36:32.088 { 00:36:32.088 "subsystem": "vmd", 00:36:32.088 "config": [] 00:36:32.088 }, 00:36:32.088 { 00:36:32.088 "subsystem": "accel", 00:36:32.088 "config": [ 00:36:32.088 { 00:36:32.088 "method": "accel_set_options", 00:36:32.088 "params": { 00:36:32.088 "small_cache_size": 128, 00:36:32.088 "large_cache_size": 16, 00:36:32.088 "task_count": 2048, 00:36:32.088 "sequence_count": 2048, 00:36:32.088 "buf_count": 2048 00:36:32.088 } 00:36:32.088 } 00:36:32.088 ] 00:36:32.088 }, 00:36:32.088 { 00:36:32.088 "subsystem": "bdev", 00:36:32.088 "config": [ 00:36:32.088 { 00:36:32.088 "method": "bdev_set_options", 00:36:32.088 "params": { 00:36:32.088 "bdev_io_pool_size": 65535, 00:36:32.088 "bdev_io_cache_size": 256, 00:36:32.088 "bdev_auto_examine": true, 00:36:32.088 "iobuf_small_cache_size": 128, 00:36:32.088 "iobuf_large_cache_size": 16 00:36:32.088 } 00:36:32.088 }, 00:36:32.088 { 00:36:32.088 "method": "bdev_raid_set_options", 00:36:32.088 "params": { 00:36:32.088 "process_window_size_kb": 1024 00:36:32.088 } 00:36:32.088 }, 00:36:32.088 { 00:36:32.088 "method": "bdev_iscsi_set_options", 00:36:32.088 "params": { 00:36:32.088 "timeout_sec": 30 00:36:32.088 } 00:36:32.088 }, 00:36:32.088 { 00:36:32.088 "method": "bdev_nvme_set_options", 00:36:32.088 "params": { 00:36:32.088 "action_on_timeout": "none", 00:36:32.088 "timeout_us": 0, 00:36:32.088 "timeout_admin_us": 0, 00:36:32.088 "keep_alive_timeout_ms": 10000, 00:36:32.088 "arbitration_burst": 0, 00:36:32.088 "low_priority_weight": 0, 00:36:32.088 "medium_priority_weight": 0, 00:36:32.088 "high_priority_weight": 0, 00:36:32.088 "nvme_adminq_poll_period_us": 10000, 00:36:32.088 "nvme_ioq_poll_period_us": 0, 00:36:32.088 "io_queue_requests": 512, 00:36:32.088 "delay_cmd_submit": true, 00:36:32.088 "transport_retry_count": 4, 00:36:32.088 "bdev_retry_count": 3, 00:36:32.088 "transport_ack_timeout": 0, 00:36:32.088 "ctrlr_loss_timeout_sec": 0, 00:36:32.088 "reconnect_delay_sec": 0, 00:36:32.088 "fast_io_fail_timeout_sec": 0, 00:36:32.088 "disable_auto_failback": false, 00:36:32.088 "generate_uuids": false, 00:36:32.088 "transport_tos": 0, 00:36:32.088 "nvme_error_stat": false, 00:36:32.088 "rdma_srq_size": 0, 00:36:32.088 "io_path_stat": false, 00:36:32.088 "allow_accel_sequence": false, 00:36:32.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:32.088 "rdma_max_cq_size": 0, 00:36:32.088 "rdma_cm_event_timeout_ms": 0, 00:36:32.088 "dhchap_digests": [ 00:36:32.088 "sha256", 00:36:32.088 "sha384", 00:36:32.088 "sha512" 00:36:32.088 ], 00:36:32.088 "dhchap_dhgroups": [ 00:36:32.088 "null", 00:36:32.088 "ffdhe2048", 00:36:32.088 "ffdhe3072", 00:36:32.088 "ffdhe4096", 00:36:32.088 "ffdhe6144", 00:36:32.088 "ffdhe8192" 00:36:32.088 ] 00:36:32.088 } 00:36:32.088 }, 00:36:32.088 { 00:36:32.088 "method": "bdev_nvme_attach_controller", 00:36:32.088 "params": { 00:36:32.088 "name": "nvme0", 00:36:32.088 "trtype": "TCP", 00:36:32.088 "adrfam": "IPv4", 00:36:32.088 "traddr": "127.0.0.1", 00:36:32.088 "trsvcid": "4420", 00:36:32.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:32.088 "prchk_reftag": false, 00:36:32.088 "prchk_guard": false, 00:36:32.088 "ctrlr_loss_timeout_sec": 0, 00:36:32.088 "reconnect_delay_sec": 0, 00:36:32.088 "fast_io_fail_timeout_sec": 0, 00:36:32.088 "psk": "key0", 00:36:32.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:32.088 "hdgst": false, 00:36:32.088 "ddgst": false 00:36:32.088 } 00:36:32.088 }, 00:36:32.088 { 00:36:32.088 "method": "bdev_nvme_set_hotplug", 00:36:32.088 "params": { 00:36:32.088 "period_us": 100000, 00:36:32.088 "enable": false 00:36:32.088 } 00:36:32.088 }, 00:36:32.088 { 00:36:32.088 "method": "bdev_wait_for_examine" 00:36:32.088 } 00:36:32.088 ] 00:36:32.088 }, 00:36:32.088 { 00:36:32.088 "subsystem": "nbd", 00:36:32.088 "config": [] 00:36:32.088 } 00:36:32.088 ] 00:36:32.088 }' 00:36:32.088 18:12:06 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:32.088 18:12:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:32.088 [2024-07-20 18:12:06.716815] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:36:32.088 [2024-07-20 18:12:06.716906] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131041 ] 00:36:32.088 EAL: No free 2048 kB hugepages reported on node 1 00:36:32.088 [2024-07-20 18:12:06.779612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:32.088 [2024-07-20 18:12:06.869056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:32.346 [2024-07-20 18:12:07.059372] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:32.911 18:12:07 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:32.911 18:12:07 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:32.911 18:12:07 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:32.911 18:12:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:32.911 18:12:07 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:33.168 18:12:07 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:33.168 18:12:07 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:33.426 18:12:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:33.426 18:12:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:33.426 18:12:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:33.426 18:12:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:33.426 18:12:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:33.426 18:12:08 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:33.426 18:12:08 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:33.683 18:12:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:33.683 18:12:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:33.683 18:12:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:33.683 18:12:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:33.683 18:12:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:33.940 18:12:08 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:33.940 18:12:08 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:33.940 18:12:08 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:33.940 18:12:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:34.198 18:12:08 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:34.198 18:12:08 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:34.198 18:12:08 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.z23VSelbfe /tmp/tmp.3o7Yy50aRN 00:36:34.198 18:12:08 keyring_file -- keyring/file.sh@20 -- # killprocess 1131041 00:36:34.198 18:12:08 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1131041 ']' 00:36:34.198 18:12:08 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1131041 00:36:34.198 18:12:08 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:36:34.198 18:12:08 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:34.198 18:12:08 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1131041 00:36:34.198 18:12:08 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:34.198 18:12:08 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:34.198 18:12:08 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1131041' 00:36:34.198 killing process with pid 1131041 00:36:34.198 18:12:08 keyring_file -- common/autotest_common.sh@965 -- # kill 1131041 00:36:34.198 Received shutdown signal, test time was about 1.000000 seconds 00:36:34.198 00:36:34.198 Latency(us) 00:36:34.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:34.198 =================================================================================================================== 00:36:34.198 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:34.198 18:12:08 keyring_file -- common/autotest_common.sh@970 -- # wait 1131041 00:36:34.456 18:12:08 keyring_file -- keyring/file.sh@21 -- # killprocess 1129367 00:36:34.456 18:12:08 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1129367 ']' 00:36:34.456 18:12:08 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1129367 00:36:34.456 18:12:08 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:34.456 18:12:09 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:34.456 18:12:09 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1129367 00:36:34.456 18:12:09 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:34.456 18:12:09 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:34.456 18:12:09 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1129367' 00:36:34.456 killing process with pid 1129367 00:36:34.456 18:12:09 keyring_file -- common/autotest_common.sh@965 -- # kill 1129367 00:36:34.456 [2024-07-20 18:12:09.025982] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:34.456 18:12:09 keyring_file -- common/autotest_common.sh@970 -- # wait 1129367 00:36:34.715 00:36:34.715 real 0m14.401s 00:36:34.715 user 0m35.481s 00:36:34.715 sys 0m3.244s 00:36:34.715 18:12:09 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:34.715 18:12:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:34.715 ************************************ 00:36:34.715 END TEST keyring_file 00:36:34.715 ************************************ 00:36:34.715 18:12:09 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:36:34.715 18:12:09 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:34.715 18:12:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:34.715 18:12:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:34.715 18:12:09 -- common/autotest_common.sh@10 -- # set +x 00:36:34.715 ************************************ 00:36:34.715 START TEST keyring_linux 00:36:34.715 ************************************ 00:36:34.715 18:12:09 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:34.974 * Looking for test storage... 
00:36:34.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:34.974 18:12:09 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:34.974 18:12:09 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:34.974 18:12:09 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:34.974 18:12:09 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:34.974 18:12:09 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:34.974 18:12:09 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.974 18:12:09 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.974 18:12:09 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.974 18:12:09 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:34.974 18:12:09 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:34.974 18:12:09 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:34.974 18:12:09 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:34.974 18:12:09 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:34.974 18:12:09 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:34.974 18:12:09 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:34.974 18:12:09 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:34.974 18:12:09 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:34.974 18:12:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:34.974 18:12:09 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:34.974 18:12:09 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:34.974 18:12:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:34.974 18:12:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:34.974 18:12:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:34.974 18:12:09 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:34.974 18:12:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:34.974 /tmp/:spdk-test:key0 00:36:34.974 18:12:09 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:34.974 18:12:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:34.974 18:12:09 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:34.974 18:12:09 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:34.974 18:12:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:34.974 18:12:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:34.974 18:12:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:34.974 18:12:09 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:34.974 18:12:09 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:34.974 18:12:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:34.974 /tmp/:spdk-test:key1 00:36:34.974 18:12:09 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1131933 00:36:34.974 18:12:09 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:34.974 18:12:09 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1131933 00:36:34.974 18:12:09 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 1131933 ']' 00:36:34.974 18:12:09 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:34.974 18:12:09 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:34.974 18:12:09 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:34.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:34.974 18:12:09 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:34.974 18:12:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:34.974 [2024-07-20 18:12:09.700751] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
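Note (not part of the captured log): the prep_key helper traced here converts each raw hex key into the TLS PSK interchange form before writing it to /tmp/:spdk-test:key0 and :key1. A rough sketch of that conversion follows, assuming the four-byte trailer appended before base64-encoding is a little-endian CRC-32 of the key bytes and that the hex string is used verbatim as the key material (as the test does).
# Sketch of the format_interchange_psk step, under the CRC-32 assumption stated above.
key=00112233445566778899aabbccddeeff
python3 - "$key" <<'PY'
import base64, sys, zlib
data = sys.argv[1].encode()                   # key material used as-is, not hex-decoded
crc = zlib.crc32(data).to_bytes(4, "little")  # assumed trailer layout
print(f"NVMeTLSkey-1:00:{base64.b64encode(data + crc).decode()}:")
PY
# For comparison, the key0 payload registered later in the trace reads:
# NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: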
00:36:34.974 [2024-07-20 18:12:09.700872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131933 ] 00:36:34.974 EAL: No free 2048 kB hugepages reported on node 1 00:36:34.974 [2024-07-20 18:12:09.760426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:35.233 [2024-07-20 18:12:09.851741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:35.492 18:12:10 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:35.492 18:12:10 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:36:35.492 18:12:10 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:35.492 18:12:10 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.492 18:12:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:35.492 [2024-07-20 18:12:10.100233] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:35.492 null0 00:36:35.492 [2024-07-20 18:12:10.132280] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:35.492 [2024-07-20 18:12:10.132704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:35.492 18:12:10 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.492 18:12:10 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:35.492 863537383 00:36:35.492 18:12:10 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:35.492 156012204 00:36:35.492 18:12:10 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1132067 00:36:35.492 18:12:10 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1132067 /var/tmp/bperf.sock 00:36:35.492 18:12:10 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:35.492 18:12:10 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 1132067 ']' 00:36:35.492 18:12:10 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:35.492 18:12:10 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:35.492 18:12:10 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:35.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:35.492 18:12:10 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:35.492 18:12:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:35.492 [2024-07-20 18:12:10.194853] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:36:35.492 [2024-07-20 18:12:10.194945] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132067 ] 00:36:35.492 EAL: No free 2048 kB hugepages reported on node 1 00:36:35.492 [2024-07-20 18:12:10.253348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:35.750 [2024-07-20 18:12:10.339497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:35.750 18:12:10 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:35.750 18:12:10 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:36:35.750 18:12:10 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:35.750 18:12:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:36.009 18:12:10 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:36.009 18:12:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:36.267 18:12:10 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:36.267 18:12:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:36.525 [2024-07-20 18:12:11.197465] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:36.525 nvme0n1 00:36:36.525 18:12:11 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:36.525 18:12:11 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:36.525 18:12:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:36.525 18:12:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:36.525 18:12:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:36.525 18:12:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.783 18:12:11 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:36.783 18:12:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:36.783 18:12:11 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:36.783 18:12:11 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:36.783 18:12:11 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:36.783 18:12:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.783 18:12:11 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:37.041 18:12:11 keyring_linux -- keyring/linux.sh@25 -- # sn=863537383 00:36:37.041 18:12:11 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:37.041 18:12:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
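Note (not part of the captured log): pulled together from the trace above, the kernel-keyring variant of the flow amounts to the sequence below. Socket path, NQNs, and key name are the ones this harness uses; the serial keyctl prints (863537383 in this run) is specific to the boot and search is only used to map the name back to it.
# Sketch assembled from the traced commands; assumes a bdevperf instance is
# already listening on /var/tmp/bperf.sock and is run from the SPDK repo root.
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
./scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
keyctl search @s user :spdk-test:key0   # resolves the key name back to its serial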
00:36:37.041 18:12:11 keyring_linux -- keyring/linux.sh@26 -- # [[ 863537383 == \8\6\3\5\3\7\3\8\3 ]] 00:36:37.041 18:12:11 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 863537383 00:36:37.041 18:12:11 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:37.041 18:12:11 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:37.299 Running I/O for 1 seconds... 00:36:38.234 00:36:38.234 Latency(us) 00:36:38.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:38.234 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:38.234 nvme0n1 : 1.04 2213.90 8.65 0.00 0.00 56698.28 10485.76 70681.79 00:36:38.234 =================================================================================================================== 00:36:38.234 Total : 2213.90 8.65 0.00 0.00 56698.28 10485.76 70681.79 00:36:38.234 0 00:36:38.234 18:12:12 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:38.234 18:12:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:38.491 18:12:13 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:38.491 18:12:13 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:38.491 18:12:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:38.491 18:12:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:38.491 18:12:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.491 18:12:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:38.747 18:12:13 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:38.747 18:12:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:38.747 18:12:13 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:38.747 18:12:13 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:38.747 18:12:13 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:36:38.747 18:12:13 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:38.747 18:12:13 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:38.747 18:12:13 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:38.747 18:12:13 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:38.747 18:12:13 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:38.747 18:12:13 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:38.747 18:12:13 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:39.004 [2024-07-20 18:12:13.673214] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:39.004 [2024-07-20 18:12:13.673681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1994270 (107): Transport endpoint is not connected 00:36:39.004 [2024-07-20 18:12:13.674669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1994270 (9): Bad file descriptor 00:36:39.004 [2024-07-20 18:12:13.675667] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:39.004 [2024-07-20 18:12:13.675691] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:39.004 [2024-07-20 18:12:13.675706] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:39.004 request: 00:36:39.004 { 00:36:39.004 "name": "nvme0", 00:36:39.004 "trtype": "tcp", 00:36:39.004 "traddr": "127.0.0.1", 00:36:39.004 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:39.004 "adrfam": "ipv4", 00:36:39.004 "trsvcid": "4420", 00:36:39.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:39.004 "psk": ":spdk-test:key1", 00:36:39.004 "method": "bdev_nvme_attach_controller", 00:36:39.004 "req_id": 1 00:36:39.004 } 00:36:39.004 Got JSON-RPC error response 00:36:39.004 response: 00:36:39.004 { 00:36:39.004 "code": -5, 00:36:39.004 "message": "Input/output error" 00:36:39.004 } 00:36:39.004 18:12:13 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:36:39.004 18:12:13 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:39.004 18:12:13 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:39.004 18:12:13 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:39.004 18:12:13 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:39.004 18:12:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:39.004 18:12:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:39.004 18:12:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:39.004 18:12:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:39.004 18:12:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:39.004 18:12:13 keyring_linux -- keyring/linux.sh@33 -- # sn=863537383 00:36:39.004 18:12:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 863537383 00:36:39.004 1 links removed 00:36:39.004 18:12:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:39.004 18:12:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:39.004 18:12:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:39.004 18:12:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:39.004 18:12:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:39.004 18:12:13 keyring_linux -- keyring/linux.sh@33 -- # sn=156012204 00:36:39.004 18:12:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 156012204 00:36:39.004 1 links removed 00:36:39.004 18:12:13 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 1132067 00:36:39.004 18:12:13 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 1132067 ']' 00:36:39.004 18:12:13 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 1132067 00:36:39.004 18:12:13 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:36:39.004 18:12:13 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:39.004 18:12:13 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1132067 00:36:39.004 18:12:13 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:39.004 18:12:13 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:39.004 18:12:13 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1132067' 00:36:39.004 killing process with pid 1132067 00:36:39.004 18:12:13 keyring_linux -- common/autotest_common.sh@965 -- # kill 1132067 00:36:39.004 Received shutdown signal, test time was about 1.000000 seconds 00:36:39.004 00:36:39.004 Latency(us) 00:36:39.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.004 =================================================================================================================== 00:36:39.004 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:39.004 18:12:13 keyring_linux -- common/autotest_common.sh@970 -- # wait 1132067 00:36:39.260 18:12:13 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1131933 00:36:39.260 18:12:13 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 1131933 ']' 00:36:39.260 18:12:13 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 1131933 00:36:39.260 18:12:13 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:36:39.260 18:12:13 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:39.260 18:12:13 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1131933 00:36:39.260 18:12:13 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:39.260 18:12:13 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:39.260 18:12:13 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1131933' 00:36:39.260 killing process with pid 1131933 00:36:39.260 18:12:13 keyring_linux -- common/autotest_common.sh@965 -- # kill 1131933 00:36:39.260 18:12:13 keyring_linux -- common/autotest_common.sh@970 -- # wait 1131933 00:36:39.825 00:36:39.825 real 0m4.887s 00:36:39.825 user 0m9.105s 00:36:39.825 sys 0m1.351s 00:36:39.825 18:12:14 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:39.825 18:12:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:39.825 ************************************ 00:36:39.825 END TEST keyring_linux 00:36:39.825 ************************************ 00:36:39.825 18:12:14 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:36:39.825 18:12:14 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:36:39.825 18:12:14 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:36:39.825 18:12:14 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:36:39.825 18:12:14 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:36:39.825 18:12:14 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:36:39.825 18:12:14 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:36:39.825 18:12:14 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:36:39.825 18:12:14 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:36:39.825 18:12:14 -- spdk/autotest.sh@352 -- 
# '[' 0 -eq 1 ']' 00:36:39.825 18:12:14 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:36:39.825 18:12:14 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:36:39.825 18:12:14 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:39.825 18:12:14 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:39.825 18:12:14 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:39.825 18:12:14 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:36:39.825 18:12:14 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:36:39.825 18:12:14 -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:39.825 18:12:14 -- common/autotest_common.sh@10 -- # set +x 00:36:39.825 18:12:14 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:36:39.825 18:12:14 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:36:39.825 18:12:14 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:36:39.825 18:12:14 -- common/autotest_common.sh@10 -- # set +x 00:36:41.779 INFO: APP EXITING 00:36:41.779 INFO: killing all VMs 00:36:41.779 INFO: killing vhost app 00:36:41.779 INFO: EXIT DONE 00:36:42.710 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:36:42.710 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:36:42.710 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:36:42.710 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:36:42.710 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:36:42.710 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:36:42.710 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:36:42.710 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:36:42.710 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:36:42.710 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:36:42.710 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:36:42.710 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:36:42.710 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:36:42.710 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:36:42.710 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:36:42.710 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:36:42.710 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:36:44.085 Cleaning 00:36:44.085 Removing: /var/run/dpdk/spdk0/config 00:36:44.085 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:44.085 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:44.085 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:44.085 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:44.085 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:44.085 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:44.085 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:44.085 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:44.085 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:44.085 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:44.085 Removing: /var/run/dpdk/spdk1/config 00:36:44.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:44.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:44.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:44.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:44.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:44.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:44.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 
00:36:44.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:44.085 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:44.085 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:44.085 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:44.085 Removing: /var/run/dpdk/spdk2/config 00:36:44.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:44.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:44.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:44.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:44.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:44.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:44.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:44.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:44.085 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:44.085 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:44.085 Removing: /var/run/dpdk/spdk3/config 00:36:44.085 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:44.085 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:44.085 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:44.085 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:44.085 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:44.085 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:44.085 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:44.085 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:44.085 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:44.085 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:44.085 Removing: /var/run/dpdk/spdk4/config 00:36:44.085 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:44.085 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:44.085 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:44.085 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:44.085 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:44.085 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:44.085 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:44.085 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:44.085 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:44.085 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:44.085 Removing: /dev/shm/bdev_svc_trace.1 00:36:44.085 Removing: /dev/shm/nvmf_trace.0 00:36:44.085 Removing: /dev/shm/spdk_tgt_trace.pid813647 00:36:44.085 Removing: /var/run/dpdk/spdk0 00:36:44.085 Removing: /var/run/dpdk/spdk1 00:36:44.085 Removing: /var/run/dpdk/spdk2 00:36:44.085 Removing: /var/run/dpdk/spdk3 00:36:44.085 Removing: /var/run/dpdk/spdk4 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1000491 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1023379 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1026005 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1029780 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1030780 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1031931 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1035051 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1037334 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1041454 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1041541 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1044303 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1044437 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1044618 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1044961 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1044966 00:36:44.085 Removing: 
/var/run/dpdk/spdk_pid1046035 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1047222 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1048398 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1049573 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1050749 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1051933 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1055727 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1056073 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1057463 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1058200 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1061841 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1063888 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1067795 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1071053 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1077208 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1081655 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1081657 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1093443 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1093845 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1094256 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1094660 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1095236 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1095642 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1096090 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1096575 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1099064 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1099318 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1103490 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1103661 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1105268 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1110300 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1110305 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1113191 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1114469 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1115883 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1116738 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1118145 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1119019 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1124177 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1124555 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1124946 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1126378 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1126775 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1127170 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1129367 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1129494 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1131041 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1131933 00:36:44.085 Removing: /var/run/dpdk/spdk_pid1132067 00:36:44.085 Removing: /var/run/dpdk/spdk_pid812100 00:36:44.085 Removing: /var/run/dpdk/spdk_pid812834 00:36:44.085 Removing: /var/run/dpdk/spdk_pid813647 00:36:44.085 Removing: /var/run/dpdk/spdk_pid814082 00:36:44.085 Removing: /var/run/dpdk/spdk_pid814773 00:36:44.085 Removing: /var/run/dpdk/spdk_pid814915 00:36:44.085 Removing: /var/run/dpdk/spdk_pid815627 00:36:44.085 Removing: /var/run/dpdk/spdk_pid815643 00:36:44.085 Removing: /var/run/dpdk/spdk_pid815884 00:36:44.085 Removing: /var/run/dpdk/spdk_pid817096 00:36:44.085 Removing: /var/run/dpdk/spdk_pid818116 00:36:44.085 Removing: /var/run/dpdk/spdk_pid818331 00:36:44.085 Removing: /var/run/dpdk/spdk_pid818608 00:36:44.085 Removing: /var/run/dpdk/spdk_pid818808 00:36:44.085 Removing: /var/run/dpdk/spdk_pid819004 00:36:44.085 Removing: /var/run/dpdk/spdk_pid819164 00:36:44.085 Removing: /var/run/dpdk/spdk_pid819320 00:36:44.085 Removing: /var/run/dpdk/spdk_pid819501 00:36:44.085 Removing: 
/var/run/dpdk/spdk_pid820080 00:36:44.085 Removing: /var/run/dpdk/spdk_pid822429 00:36:44.085 Removing: /var/run/dpdk/spdk_pid822599 00:36:44.085 Removing: /var/run/dpdk/spdk_pid822759 00:36:44.085 Removing: /var/run/dpdk/spdk_pid822777 00:36:44.085 Removing: /var/run/dpdk/spdk_pid823193 00:36:44.085 Removing: /var/run/dpdk/spdk_pid823196 00:36:44.085 Removing: /var/run/dpdk/spdk_pid823716 00:36:44.085 Removing: /var/run/dpdk/spdk_pid823749 00:36:44.085 Removing: /var/run/dpdk/spdk_pid823986 00:36:44.085 Removing: /var/run/dpdk/spdk_pid824043 00:36:44.085 Removing: /var/run/dpdk/spdk_pid824211 00:36:44.085 Removing: /var/run/dpdk/spdk_pid824220 00:36:44.085 Removing: /var/run/dpdk/spdk_pid825019 00:36:44.085 Removing: /var/run/dpdk/spdk_pid825365 00:36:44.085 Removing: /var/run/dpdk/spdk_pid825563 00:36:44.085 Removing: /var/run/dpdk/spdk_pid825732 00:36:44.085 Removing: /var/run/dpdk/spdk_pid825760 00:36:44.085 Removing: /var/run/dpdk/spdk_pid825940 00:36:44.085 Removing: /var/run/dpdk/spdk_pid826101 00:36:44.085 Removing: /var/run/dpdk/spdk_pid826254 00:36:44.085 Removing: /var/run/dpdk/spdk_pid826526 00:36:44.085 Removing: /var/run/dpdk/spdk_pid826687 00:36:44.085 Removing: /var/run/dpdk/spdk_pid826840 00:36:44.085 Removing: /var/run/dpdk/spdk_pid827002 00:36:44.085 Removing: /var/run/dpdk/spdk_pid827268 00:36:44.085 Removing: /var/run/dpdk/spdk_pid827437 00:36:44.085 Removing: /var/run/dpdk/spdk_pid827588 00:36:44.085 Removing: /var/run/dpdk/spdk_pid827798 00:36:44.085 Removing: /var/run/dpdk/spdk_pid828017 00:36:44.085 Removing: /var/run/dpdk/spdk_pid828178 00:36:44.085 Removing: /var/run/dpdk/spdk_pid828335 00:36:44.085 Removing: /var/run/dpdk/spdk_pid828603 00:36:44.085 Removing: /var/run/dpdk/spdk_pid828765 00:36:44.085 Removing: /var/run/dpdk/spdk_pid828923 00:36:44.085 Removing: /var/run/dpdk/spdk_pid829088 00:36:44.086 Removing: /var/run/dpdk/spdk_pid829357 00:36:44.086 Removing: /var/run/dpdk/spdk_pid829523 00:36:44.086 Removing: /var/run/dpdk/spdk_pid829679 00:36:44.086 Removing: /var/run/dpdk/spdk_pid829863 00:36:44.086 Removing: /var/run/dpdk/spdk_pid830067 00:36:44.344 Removing: /var/run/dpdk/spdk_pid832118 00:36:44.344 Removing: /var/run/dpdk/spdk_pid885224 00:36:44.344 Removing: /var/run/dpdk/spdk_pid887724 00:36:44.344 Removing: /var/run/dpdk/spdk_pid894653 00:36:44.344 Removing: /var/run/dpdk/spdk_pid897824 00:36:44.344 Removing: /var/run/dpdk/spdk_pid900432 00:36:44.344 Removing: /var/run/dpdk/spdk_pid900844 00:36:44.344 Removing: /var/run/dpdk/spdk_pid908074 00:36:44.344 Removing: /var/run/dpdk/spdk_pid908083 00:36:44.344 Removing: /var/run/dpdk/spdk_pid908730 00:36:44.344 Removing: /var/run/dpdk/spdk_pid909270 00:36:44.344 Removing: /var/run/dpdk/spdk_pid909928 00:36:44.344 Removing: /var/run/dpdk/spdk_pid910329 00:36:44.344 Removing: /var/run/dpdk/spdk_pid910332 00:36:44.344 Removing: /var/run/dpdk/spdk_pid910597 00:36:44.344 Removing: /var/run/dpdk/spdk_pid910618 00:36:44.344 Removing: /var/run/dpdk/spdk_pid910719 00:36:44.344 Removing: /var/run/dpdk/spdk_pid911284 00:36:44.344 Removing: /var/run/dpdk/spdk_pid911932 00:36:44.344 Removing: /var/run/dpdk/spdk_pid912588 00:36:44.344 Removing: /var/run/dpdk/spdk_pid912990 00:36:44.344 Removing: /var/run/dpdk/spdk_pid912994 00:36:44.344 Removing: /var/run/dpdk/spdk_pid913139 00:36:44.344 Removing: /var/run/dpdk/spdk_pid914128 00:36:44.344 Removing: /var/run/dpdk/spdk_pid915209 00:36:44.344 Removing: /var/run/dpdk/spdk_pid920694 00:36:44.344 Removing: /var/run/dpdk/spdk_pid920975 00:36:44.344 Removing: 
/var/run/dpdk/spdk_pid923475 00:36:44.344 Removing: /var/run/dpdk/spdk_pid927168 00:36:44.344 Removing: /var/run/dpdk/spdk_pid929214 00:36:44.344 Removing: /var/run/dpdk/spdk_pid935527 00:36:44.344 Removing: /var/run/dpdk/spdk_pid940660 00:36:44.344 Removing: /var/run/dpdk/spdk_pid941959 00:36:44.344 Removing: /var/run/dpdk/spdk_pid942627 00:36:44.344 Removing: /var/run/dpdk/spdk_pid953300 00:36:44.344 Removing: /var/run/dpdk/spdk_pid955503 00:36:44.344 Removing: /var/run/dpdk/spdk_pid980498 00:36:44.344 Removing: /var/run/dpdk/spdk_pid983393 00:36:44.344 Removing: /var/run/dpdk/spdk_pid984458 00:36:44.344 Removing: /var/run/dpdk/spdk_pid985769 00:36:44.344 Removing: /var/run/dpdk/spdk_pid985904 00:36:44.344 Removing: /var/run/dpdk/spdk_pid986043 00:36:44.344 Removing: /var/run/dpdk/spdk_pid986064 00:36:44.344 Removing: /var/run/dpdk/spdk_pid986497 00:36:44.344 Removing: /var/run/dpdk/spdk_pid987809 00:36:44.344 Removing: /var/run/dpdk/spdk_pid988416 00:36:44.344 Removing: /var/run/dpdk/spdk_pid988841 00:36:44.344 Removing: /var/run/dpdk/spdk_pid990447 00:36:44.344 Removing: /var/run/dpdk/spdk_pid990873 00:36:44.344 Removing: /var/run/dpdk/spdk_pid991316 00:36:44.344 Removing: /var/run/dpdk/spdk_pid993761 00:36:44.344 Removing: /var/run/dpdk/spdk_pid997071 00:36:44.344 Clean 00:36:44.344 18:12:19 -- common/autotest_common.sh@1447 -- # return 0 00:36:44.344 18:12:19 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:36:44.344 18:12:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:44.344 18:12:19 -- common/autotest_common.sh@10 -- # set +x 00:36:44.344 18:12:19 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:36:44.344 18:12:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:44.344 18:12:19 -- common/autotest_common.sh@10 -- # set +x 00:36:44.344 18:12:19 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:44.344 18:12:19 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:44.344 18:12:19 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:44.344 18:12:19 -- spdk/autotest.sh@391 -- # hash lcov 00:36:44.344 18:12:19 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:44.344 18:12:19 -- spdk/autotest.sh@393 -- # hostname 00:36:44.344 18:12:19 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:44.602 geninfo: WARNING: invalid characters removed from testname! 
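For reference, a minimal sketch of the coverage post-processing sequence this tail of the run performs: capture the counters gathered during the tests, merge them with the pre-test baseline, then strip paths that should not count toward SPDK coverage (the merge and filter commands appear in the log entries that follow). Paths, tracefile names, and filter patterns mirror the commands shown in the log; the option list is abridged and the filter loop is an illustrative condensation, not part of the autotest scripts.

#!/usr/bin/env bash
# Sketch only: reproduces the lcov capture/merge/filter flow seen in this log.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT_DIR=$SPDK_DIR/../output
# Abridged option list; the log also passes several --rc genhtml_* settings.
LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q)

# 1) Capture test-time counters into cov_test.info, tagged with the host name.
lcov "${LCOV_OPTS[@]}" -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT_DIR/cov_test.info"

# 2) Merge the pre-test baseline with the test capture into cov_total.info.
lcov "${LCOV_OPTS[@]}" -a "$OUT_DIR/cov_base.info" -a "$OUT_DIR/cov_test.info" -o "$OUT_DIR/cov_total.info"

# 3) Remove coverage for paths outside the SPDK tree proper.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov "${LCOV_OPTS[@]}" -r "$OUT_DIR/cov_total.info" "$pattern" -o "$OUT_DIR/cov_total.info"
done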
00:37:16.658 18:12:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:16.658 18:12:51 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:19.938 18:12:54 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:22.463 18:12:56 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:25.809 18:12:59 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:28.334 18:13:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:31.605 18:13:05 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:31.605 18:13:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:31.605 18:13:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:31.605 18:13:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:31.605 18:13:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:31.605 18:13:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.605 18:13:05 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.605 18:13:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.605 18:13:05 -- paths/export.sh@5 -- $ export PATH 00:37:31.605 18:13:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.605 18:13:05 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:31.605 18:13:05 -- common/autobuild_common.sh@437 -- $ date +%s 00:37:31.605 18:13:05 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721491985.XXXXXX 00:37:31.605 18:13:05 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721491985.NFSmP7 00:37:31.605 18:13:05 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:37:31.605 18:13:05 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:37:31.605 18:13:05 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:37:31.605 18:13:05 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:37:31.605 18:13:05 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:31.605 18:13:05 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:31.605 18:13:05 -- common/autobuild_common.sh@453 -- $ get_config_params 00:37:31.605 18:13:05 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:37:31.605 18:13:05 -- common/autotest_common.sh@10 -- $ set +x 00:37:31.605 18:13:05 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:37:31.605 18:13:05 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:37:31.605 18:13:05 -- pm/common@17 -- $ local monitor 00:37:31.605 18:13:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:31.605 18:13:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:31.605 18:13:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:31.605 
18:13:05 -- pm/common@21 -- $ date +%s 00:37:31.605 18:13:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:31.605 18:13:05 -- pm/common@21 -- $ date +%s 00:37:31.605 18:13:05 -- pm/common@25 -- $ sleep 1 00:37:31.605 18:13:05 -- pm/common@21 -- $ date +%s 00:37:31.605 18:13:05 -- pm/common@21 -- $ date +%s 00:37:31.605 18:13:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721491985 00:37:31.605 18:13:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721491985 00:37:31.605 18:13:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721491985 00:37:31.605 18:13:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721491985 00:37:31.605 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721491985_collect-vmstat.pm.log 00:37:31.605 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721491985_collect-cpu-load.pm.log 00:37:31.605 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721491985_collect-cpu-temp.pm.log 00:37:31.605 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721491985_collect-bmc-pm.bmc.pm.log 00:37:32.169 18:13:06 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:37:32.169 18:13:06 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:37:32.169 18:13:06 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:32.169 18:13:06 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:32.169 18:13:06 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:32.169 18:13:06 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:32.169 18:13:06 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:32.169 18:13:06 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:32.169 18:13:06 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:32.169 18:13:06 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:32.169 18:13:06 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:32.169 18:13:06 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:37:32.169 18:13:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:37:32.169 18:13:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:37:32.169 18:13:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:32.169 18:13:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:37:32.169 18:13:06 -- pm/common@44 -- $ pid=1143100 00:37:32.169 18:13:06 -- pm/common@50 -- $ kill -TERM 1143100 00:37:32.169 18:13:06 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:37:32.169 18:13:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:37:32.169 18:13:06 -- pm/common@44 -- $ pid=1143102 00:37:32.169 18:13:06 -- pm/common@50 -- $ kill -TERM 1143102 00:37:32.169 18:13:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:32.169 18:13:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:37:32.169 18:13:06 -- pm/common@44 -- $ pid=1143104 00:37:32.169 18:13:06 -- pm/common@50 -- $ kill -TERM 1143104 00:37:32.169 18:13:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:32.169 18:13:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:37:32.169 18:13:06 -- pm/common@44 -- $ pid=1143133 00:37:32.169 18:13:06 -- pm/common@50 -- $ sudo -E kill -TERM 1143133 00:37:32.169 + [[ -n 709297 ]] 00:37:32.169 + sudo kill 709297 00:37:32.179 [Pipeline] } 00:37:32.197 [Pipeline] // stage 00:37:32.202 [Pipeline] } 00:37:32.220 [Pipeline] // timeout 00:37:32.225 [Pipeline] } 00:37:32.242 [Pipeline] // catchError 00:37:32.248 [Pipeline] } 00:37:32.266 [Pipeline] // wrap 00:37:32.272 [Pipeline] } 00:37:32.288 [Pipeline] // catchError 00:37:32.297 [Pipeline] stage 00:37:32.299 [Pipeline] { (Epilogue) 00:37:32.314 [Pipeline] catchError 00:37:32.315 [Pipeline] { 00:37:32.330 [Pipeline] echo 00:37:32.332 Cleanup processes 00:37:32.338 [Pipeline] sh 00:37:32.635 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:32.635 1143235 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:37:32.635 1143364 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:32.648 [Pipeline] sh 00:37:32.920 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:32.920 ++ grep -v 'sudo pgrep' 00:37:32.920 ++ awk '{print $1}' 00:37:32.920 + sudo kill -9 1143235 00:37:32.932 [Pipeline] sh 00:37:33.209 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:43.182 [Pipeline] sh 00:37:43.513 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:43.513 Artifacts sizes are good 00:37:43.529 [Pipeline] archiveArtifacts 00:37:43.536 Archiving artifacts 00:37:43.749 [Pipeline] sh 00:37:44.031 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:44.046 [Pipeline] cleanWs 00:37:44.056 [WS-CLEANUP] Deleting project workspace... 00:37:44.056 [WS-CLEANUP] Deferred wipeout is used... 00:37:44.063 [WS-CLEANUP] done 00:37:44.065 [Pipeline] } 00:37:44.087 [Pipeline] // catchError 00:37:44.102 [Pipeline] sh 00:37:44.384 + logger -p user.info -t JENKINS-CI 00:37:44.393 [Pipeline] } 00:37:44.410 [Pipeline] // stage 00:37:44.415 [Pipeline] } 00:37:44.433 [Pipeline] // node 00:37:44.439 [Pipeline] End of Pipeline 00:37:44.477 Finished: SUCCESS